Thread Easy

트위터 스레드의 올인원 파트너

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first: browse tweet threads


historically, the naive gave us:

boats - "we can sail to new lands"
planes - "we can fly the skies"
rockets - "we can reach the moon"
computers - "we can build calculation machines"

statements that were once taken as delusional. meanwhile, the cynical got us nothing but endless arguments about what is not possible, only to be proven wrong every single time

I'd rather be delusional than a drag

even if all my takes are dumb and my predictions end up being completely wrong, at least I'm doing meaningful work towards the world I want to live in

HVM4 will soon be here, it will be the most advanced compiler for one of the most interesting models of computation humans devised, and people all around the world will be able to use it to push their own work forward

what are skeptics adding to society?


Kind / Bend / HVM / INets / λCalculus

Taelin
Mon Nov 10 22:52:37
yeah, not this time guys

the intern probably didn't pass the score along

we'll have to go the hard way after all

256/65536 is already locked in



Taelin
Mon Nov 10 19:12:37
just to save face, the main take here is we'll have AGI ~6 months after a model can reproduce a correct, working GPT-2 clone in plain C, with any tweaks you ask. we all agree LLMs will eventually do that, right? ofc, if they take 10 years to do so, then we're that far from AGI



Taelin
Mon Nov 10 13:28:00
can a big lab plss train a massive model exclusively on synthetic data, including only correct raw type theory statements and proofs

no English

NO LEAN

no metavars

no tactics

just core, complete terms in raw CoIC or similar

then publish it pls

thank you 🫶😍



Taelin
Mon Nov 10 12:53:22
I'm afraid those who believe AGI is very far away are probably thinking in terms of

"will LLMs scale to AGI?"

rather than

"is humanity closer to AGI?"

Like they completely forget to account for upcoming breakthroughs, and they certainly don't think about how existing tools accelerate the pace of these breakthroughs. They see GPT-3, GPT-4, GPT-5, and picture in their heads: "will GPT-7 be AGI?". Then they realize that, no, it wouldn't, obviously. And they then project AGI as being many years away.

If I'm not mistaken, it took Karpathy about 1 year to implement NanoGPT. Now, take a moment to imagine a model capable of fulfilling this prompt:

"write a working clone of GPT-2 in plain C, except with..."

As soon as such a thing exists and is broadly available, LLMs will be nearing their end. We'll instantly enter a transition era between this thing and the next thing, because labs all around the world will be doing ultra fast research and experimentation, trying new systems, reasoning about the very nature of intelligence. And the result of that will be a truly general intelligence system.

I honestly think this will catch many off guard, in particular those working at major AI labs, because they're getting comfortable with the LLM curve. They think the LLM curve is THE intelligence curve. But it isn't.

The intelligence exponential was driven by step breakthroughs. It started with life, passed through bacteria, fish, dinosaurs, humans, fire, agriculture, writing, mathematics, the printing press, steam engines, electronics, computers, the internet, and now LLMs. Each thing accelerated progress towards the next thing. LLMs aren't the last thing, but they're the thing before the last.

When I say "AGI around end of 2026" I'm not talking about GPT-7. I'm talking about XYZ-1, which will be implemented by a team with access to GPT-6...



Taelin
Mon Nov 10 11:59:26