Thread Easy
  • Explore
  • Compose thread

Your all-in-one partner for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first — browse tweet threads


Good writeup. The West uses plenty of methods and money to subsidize supply though. They just Have Worse Outcomes.
Imo the main difference is that the CPC is teleological, while the USG is homeostatic. You need to cultivate tech trees, not just pass politically viable budgets.

re the original post. Chynese are too poor and low-productivity to compete with Mississippi. They just ain't got any money to spend, so it stands to reason that GDP is so low. Kimi with search gives a breakdown of annual spending of a typical student in Chongqing (urban pop. 22M).

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Mon Nov 10 12:24:24
I'm afraid those who believe AGI is very far away are probably thinking in terms of

"will LLMs scale to AGI?"

rather than

"is humanity closer to AGI?"

Like they completely forget to account for upcoming breakthroughs, and they certainly don't think about how existing tools accelerate the pace of those breakthroughs. They see GPT-3, GPT-4, GPT-5, and picture in their heads: "will GPT-7 be AGI?". Then they realize that, no, it obviously won't. And so they project AGI as being many years away.

If I'm not mistaken, it took Karpathy about 1 year to implement NanoGPT. Now, take a moment to imagine a model capable of passing this prompt:

"write a working clone of GPT-2 in plain C, except with..."

As soon as such a thing exists and is broadly available, LLMs will be nearing their end. We'll instantly enter a transition era between this thing and the next thing, because labs all around the world will be doing ultra fast research and experimentation, trying new systems, reasoning about the very nature of intelligence. And the result of that will be a truly general intelligence system.

I honestly think this will catch many off guard, in particular those working at major AI labs, because they're getting comfortable with the LLM curve. They think the LLM curve is THE intelligence curve. But it isn't.

The intelligence exponential was driven by step breakthroughs. It started with life, passed through bacteria, fish, dinosaurs, humans, fire, agriculture, writing, mathematics, the printing press, steam engines, electronics, computers, the internet, and now LLMs. Each thing accelerated progress towards the next thing. LLMs aren't the last thing, but they're the thing before the last.

When I say "AGI around end of 2026" I'm not talking about GPT-7. I'm talking about XYZ-1, which will be implemented by a team with access to GPT-6...

Kind / Bend / HVM / INets / λCalculus

Taelin
Mon Nov 10 11:59:26
RT @cormachayden_: Your quality of life shifts the moment you realize:

creating > consuming
building > using
acting > procrastination
inve…

Photographer & software engineer into publishing. Loves building w/ Nodejs, React, Ruby/Rails, Python - making shipping fun! DM for collabs. ❤️ @JiwonKwak6

Ronald
Mon Nov 10 11:58:26
Best investment of 20 minutes of your time. Please read this. @yishan has banger after banger in this one:

"American elite thinking tends to focus on "virtualized" things (like prices) while Chinese elites think more about "real atoms" (actual goods). Hence, viewing the prices of things as being too high leads to favoring virtualized solutions like helping with the price (give more virtualized money to bring down the virtualized price). Thinking about the amount of physical goods leads to thinking about how to make more of the physical good (build more factories)."

Even if you know all that, he writes it with such great flourish that you must read it @teortaxesTex @zephyr_z9

AI @amazon. All views personal!

GDP
Mon Nov 10 11:54:12
How to find dev jobs/clients?

One of the ways is to showcase what you can do on GitHub, based on REAL projects.

So, Asfia took my advice and it seems to be working!

~20 yrs in web-dev, now mostly Laravel. My Laravel courses: https://t.co/HRUAJdMRZL My Youtube channel: https://t.co/qPQAkaov2F

Povilas Korop | Laravel Courses Creator & Youtuber
Mon Nov 10 11:53:01
just received a job submission on RanchWork․com that says:

"We supply guns and ammo for varmint hunting"

they kept laying me off so I began building 🚜 🌱 https://t.co/wfrYC5S7wn 🧅 📦 https://t.co/JtMqAWilhs ecomm 🐂 🛠️ https://t.co/E8U0DUsKzT jobs 🟥 🟦 hottytoddy

Peter Askew
Mon Nov 10 11:52:54