Thread Easy

Your one-stop companion for Twitter threads

© 2025 Thread Easy. All Rights Reserved.

Explore

Newest first — browse tweet threads

Quick overview of HOC / HVM / Bend's state:
- about 1 year ago, we launched Bend1
- first lang to run closures + fast obj allocator on GPU
- near-ideal speedup up to 10000+ cores
- based on HVM2, a strict runtime for Interaction Nets

Problems:
- interpretation overhead still significant
- needs a full RTX 4090 to beat 1-core OCaml / JavaScript / etc.
- big practical limitations (int24, no IO, no packages)
- despite Python syntax, it was still hard to use
- turns out most devs can't think recursively
- incompatible with lazy evaluation (not β-optimal!!)

I was disappointed by the problems above. At the same time, I was increasingly optimistic about applying optimal evaluation to the problem of program synthesis, which is a cornerstone of Symbolic AI - a failed idea, but one that left me with a strong feeling of "I can fix it".

I made a decision: throw HVM2 away (💀) and go back to HVM1's roots, which were based on my "Interaction Calculus" and featured β-optimality. I heavily polished and improved it, resulting in HVM3, a prototype written in Haskell. I then used it to understand and research program synthesis on optimal evaluators. This was HARD, and cost about a year of my life, but the results were positive, and our system now beats all published alternatives in efficiency and capabilities.

Now, we're taking all that and solidifying it: implementing the runtime / compiler in raw C, so it can run as efficiently as possible on our humble Mac Mini cluster (🥹), and serving it to the world via an API.

I expected to launch by October, but there are still some challenges costing me more time than I anticipated. For one, finding Lean proofs with SupGen requires very careful handling of superpositions, and doing that in C is actually HARD AS HELL - but things are moving steadily and we have a lot done already, and I still expect to launch Bend2 / HVM4 this year or Q1 2026.

Bend2 will have:
- parallel CPU runtime with lazy/optimal mode (!!!)
- 16 / 32 / 64 bit ints, uints and floats (finally)
- arbitrary IO via lightweight C interop (like Zig!)
- no CUDA yet, due to lack of time, very doable though
- most importantly: SupGen integration

SupGen is something new and the main novelty behind Bend2. It is *not* a traditional AI, it is a whole new thing capable of generating code based on examples and specs. I think many (in particular, those in deep learning) will be caught totally off guard by how much we can accomplish with pure symbolic search, and, more than anything else, I can't wait to watch that reaction.
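To make "generating code based on examples" concrete: SupGen's internals aren't described here, but the general flavor of pure symbolic search can be sketched as a toy enumerative synthesizer. Everything below - the grammar, the search strategy, the names - is an invented illustration, not SupGen's actual algorithm:

```python
# Toy enumerative program synthesis: search a space of small arithmetic
# expressions until one matches every input/output example.
# This is only a sketch of the general idea, not SupGen.

LEAVES = ["x", "1", "2"]   # variables and constants
OPS = ["+", "*", "-"]      # binary operators

def exprs(depth):
    """All expression strings up to the given nesting depth."""
    if depth == 0:
        return list(LEAVES)
    smaller = exprs(depth - 1)
    out = list(smaller)
    for op in OPS:
        for a in smaller:
            for b in smaller:
                out.append(f"({a} {op} {b})")
    return out

def synthesize(examples, max_depth=2):
    """Return the first expression consistent with all (x, y) examples."""
    for depth in range(max_depth + 1):
        for e in exprs(depth):
            if all(eval(e, {"x": x}) == y for x, y in examples):
                return e
    return None

# Search for an expression mapping 1->3, 2->5, 3->7,
# i.e. something equivalent to 2*x + 1.
print(synthesize([(1, 3), (2, 5), (3, 7)]))
```

Real synthesizers prune this search aggressively; the thread's point is that superpositions let an optimal evaluator explore many candidate programs while sharing the work between them.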


Also forgot to mention:
- Bend2 will export to JavaScript / Haskell, so you can use it to write normal apps without having to wait for support in Bend's ecosystem
- Bend2 will, sadly, break a promise: "if it can run in parallel, it will run in parallel". That promise is *obviously* incompatible with lazy evaluation (either you wait to see if an expression will be needed, or you reduce it in parallel - you can't have both). I still want to offer a full strict mode as a direct update to HVM2 in the future, but time is short and that's not our focus right now ): on the bright side, I believe we'll be able to run lazy mode on GPUs. In practice, I believe this will be much better than full strict parallelism
- our WeFunder campaign is still active, but I'm not actively following it, and it will close after launch
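The lazy-vs-parallel tension described above can be shown in miniature. In this toy Python sketch a counter stands in for reduction work; it illustrates the tradeoff itself, not how HVM schedules anything:

```python
# Strict evaluation commits to reducing every argument, so unused
# arguments can be reduced in parallel -- but their work may be wasted.
# Lazy evaluation only reduces what is demanded, but it must wait to
# find out what is demanded. You can't have both at once.

work = {"count": 0}

def expensive(n):
    """Stand-in for a reducible expression; counts how often it runs."""
    work["count"] += 1
    return n * n

def pick_first_strict(a, b):
    # Strict: both arguments were already evaluated before the call.
    return a

def pick_first_lazy(a, b):
    # Lazy: arguments are thunks; only the forced one does work.
    return a()

work["count"] = 0
pick_first_strict(expensive(3), expensive(4))
assert work["count"] == 2   # both arguments reduced, one wasted

work["count"] = 0
pick_first_lazy(lambda: expensive(3), lambda: expensive(4))
assert work["count"] == 1   # only the needed argument reduced
```

In the strict case the two `expensive` calls are independent and could run on separate cores; in the lazy case there is nothing to run until `a()` is forced.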

Taelin
Mon Nov 03 19:06:10
we are still taking side event listings: every night from Nov 17-23 there are hundreds of AI Engineers we can send to you! just put up a Luma or Partiful and drop it here.

https://t.co/BoMjesniNl
the big one is @iporollo doing the epic AIE x @cerebral_valley
Hackathon that weekend - come sponsor and hack for ultimate glory!


achieve ambition with intentionality, intensity, & integrity - @dxtipshq - @sveltesociety - @aidotengineer - @latentspacepod - @cognition + @smol_ai

avatar for swyx
swyx
Mon Nov 03 19:03:31
We have the best new media team @a16z! stoked to see the finished product soon 🫶🏼


AI Apps investing @a16z | Investor in @elevenlabsio, @function, @cluely, @trymirage, @slingshotai_inc, @partiful & more | Growth @Snap & CFO @livebungalow

Bryan Kim
Mon Nov 03 19:03:01
Poll: If you give an LLM a terminal tool, which would you want it to do on an intermediate math problem it can probably solve on its own but might not:

Cofounder and Head of Post Training @NousResearch, prev @StabilityAI Github: https://t.co/LZwHTUFwPq HuggingFace: https://t.co/sN2FFU8PVE

Teknium (e/λ)
Mon Nov 03 19:02:54
this weekend, i vibe coded a prototype email client that i want...

- for each email, looks up contact details + notes in Attio, then org details, looks at lists to determine if portfolio, LP, etc.
- summarizes this context and creates content relevance score
- uses this content to categorize and tag emails
- uses this to decide if it's a quick reply, research and reply, or take action and reply
- if research/actions, suggests steps
- drafts email based on all of this
- i can create custom rules/prompts based on categories, which get auto updated when i edit a suggested action item or draft

far from ready, but cool to see it starting to work with real data
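The steps above amount to a small triage pipeline: look up context, score relevance, categorize, pick an action, draft a reply. A hypothetical Python sketch - the Attio lookup, the scoring heuristic, and every name here are invented placeholders, not the actual prototype:

```python
# Hypothetical sketch of the email triage pipeline described above.
# The CRM lookup and scoring logic are invented stand-ins.
from dataclasses import dataclass, field

@dataclass
class Email:
    sender: str
    subject: str
    body: str

@dataclass
class Triage:
    category: str
    tags: list = field(default_factory=list)
    action: str = "quick_reply"
    draft: str = ""

def lookup_context(email, crm):
    """Stand-in for the Attio contact/org/list lookup."""
    return crm.get(email.sender, {"lists": []})

def relevance_score(email, context):
    """Toy heuristic: senders on a known list score higher."""
    return 0.5 if context["lists"] else 0.1

def triage(email, crm):
    ctx = lookup_context(email, crm)
    score = relevance_score(email, ctx)
    category = "portfolio" if "portfolio" in ctx["lists"] else "general"
    # Low-relevance mail gets a quick reply; the rest needs research first.
    action = "research_and_reply" if score >= 0.5 else "quick_reply"
    draft = f"Hi, thanks for reaching out about {email.subject}."
    return Triage(category=category, tags=ctx["lists"],
                  action=action, draft=draft)

crm = {"founder@example.com": {"lists": ["portfolio"]}}
mail = Email("founder@example.com", "Q3 update", "...")
result = triage(mail, crm)
print(result.category, result.action)  # portfolio research_and_reply
```

The custom rules/prompts in the last bullet would slot in as per-category overrides applied before drafting.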


VC by day @untappedvc, builder by night: @babyagi_, @pippinlovesyou @pixelbeastsnft. Build-in-public log: https://t.co/UdHHGbZba5

Yohei
Mon Nov 03 19:02:05