
Explorer

Newest first — browse tweet threads

If 'cognitive core' knows only a minimum of facts, a lot of information must be included in the context, and it would be much more efficient to use cartridges with pre-computed thoughts than to scan through raw documents on each query.

Moreover, the earlier "prefix tuning" paper demonstrated that a KV-prefix can have the same effect as fine-tuning, so a cartridge can also include skills, textual style, etc. And unlike LoRA adapters, they are composable.

Blockchain tech guy, made world's first token wallet and decentralized exchange protocol in 2012; CTO ChromaWay / Chromia

Alex Mizrahi
Thu Oct 30 17:37:50
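
To make the mechanism in the two tweets above concrete: at the lowest level a cartridge is a KV-cache prefix that is computed (or trained) once and then prepended to every query, so the model never re-reads the raw document. Below is a minimal sketch of just that reuse step, assuming a Hugging Face causal LM; the model name, the sample document, and the answer() helper are illustrative choices, and real cartridges with "pre-computed thoughts" are additionally trained (e.g. via self-distillation) rather than taken straight from a single forward pass.

```python
import copy
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # illustrative; any Hugging Face causal LM works for the sketch
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

# Stand-in reference text; in the cartridge setting this would be a large corpus.
document = "Chromia is a relational blockchain platform developed by ChromaWay."
doc_ids = tok(document, return_tensors="pt").input_ids

# Build the "cartridge": one forward pass over the document, keep only the KV cache.
with torch.no_grad():
    cartridge = model(doc_ids, use_cache=True).past_key_values


def answer(query: str, max_new_tokens: int = 30) -> str:
    """Greedily decode an answer on top of the cached document, never re-encoding it."""
    past = copy.deepcopy(cartridge)  # keep the shared cartridge pristine across queries
    ids = tok(query, return_tensors="pt").input_ids
    out_tokens = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            out = model(ids, past_key_values=past, use_cache=True)
            past = out.past_key_values               # cache grows with each step
            ids = out.logits[:, -1:].argmax(dim=-1)  # greedy next token, shape (1, 1)
            out_tokens.append(ids)
    return tok.decode(torch.cat(out_tokens, dim=-1)[0])


print(answer("\nQ: Who develops Chromia?\nA:"))
```

Composing cartridges then roughly amounts to concatenating their cached keys and values along the sequence dimension, which is why they stack more naturally than merged LoRA weights.
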
I just realized that if we live in a world where Karpathy's "cognitive core" thesis (https://t.co/1RtvrEGIXd) is true in a maximal form (i.e. that a cognitive core LLM is actually superior - that's how he described it in the interview with Dwarkesh P.),

then value capture in the AI market might look fundamentally different. That is, there might be a lot less value in huge "frontier" models which only the largest companies can develop. On the other hand, high-performance usage of "cognitive core" would require access

Alex Mizrahi
Thu Oct 30 17:37:46
In retrospect it's hilarious that I will now never need to learn Rust

Opening portals to handheld VR at https://t.co/A2JMItorCV. Problems soluble, potential to improve invariant.

gfodor.id
Thu Oct 30 17:37:34
Thread

We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «C’est la guerre.» ®1

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Thu Oct 30 17:36:26
time to ship

🧑‍💻 https://t.co/Y30jsaI4oH $20K/m ⚡️ https://t.co/vatLDmiHKe $12K/m 📈 https://t.co/3EDxln5U2Q $6K/m 🍜 https://t.co/r07EpGTwyA $.5K/m 🧾 https://t.co/7olaOzVGML $0/m 🛡️ https://t.co/LFgSlrZaip $0/m 🧬 https://t.co/SfrVXVtU38 $0/m +18 https://t.co/4zCWHGJWRq

Marc Lou
Thu Oct 30 17:34:14