Thread Easy

Your all-in-one partner for Twitter threads

Explorer

The race for LLM "cognitive core" - a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing.
Its features are slowly crystallizing:

- Natively multimodal text/vision/audio at both input and output.
- Matryoshka-style architecture allowing a dial of capability up and down at test time (sketched below, after this list).
- Reasoning, also with a dial. (system 2)
- Aggressively tool-using.
- On-device finetuning LoRA slots for test-time training, personalization and customization.
- Delegates and double checks just the right parts with the oracles in the cloud if internet is available.
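
As a concrete illustration of that capability dial, here is a minimal sketch of Matryoshka-style weight slicing, assuming an MRL-like setup where leading slices of each weight matrix form usable sub-models; the class and the `width_fraction` knob are illustrative inventions, not a description of any shipped architecture.

```python
import numpy as np

class MatryoshkaLinear:
    """Illustrative layer whose leading weight slices form nested sub-layers,
    so one matrix can serve several capability levels (assumption: training
    was arranged so these prefixes remain useful on their own)."""

    def __init__(self, d_in: int, d_out: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)

    def forward(self, x: np.ndarray, width_fraction: float = 1.0) -> np.ndarray:
        # Dial capability down by using only a leading slice of the weights;
        # a 0.25 slice of both dims costs ~1/16 of the full matmul FLOPs.
        d_out = max(1, int(self.W.shape[0] * width_fraction))
        d_in = max(1, int(self.W.shape[1] * width_fraction))
        return self.W[:d_out, :d_in] @ x[:d_in]

layer = MatryoshkaLinear(1024, 1024)
x = np.ones(1024)
print(layer.forward(x, width_fraction=1.0).shape)   # (1024,) full capability
print(layer.forward(x, width_fraction=0.25).shape)  # (256,)  low-power mode
```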

It doesn't know that William the Conqueror's reign ended on September 9, 1087, but it vaguely recognizes the name and can look up the date. It can't recite the SHA-256 of the empty string as e3b0c442..., but it can calculate it quickly should you really want it.
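
The compute-over-recall point is easy to make concrete with Python's standard library; the digest below is the well-known SHA-256 of the empty string:

```python
import hashlib

# No memorization needed: one cheap local computation recovers the digest.
empty_digest = hashlib.sha256(b"").hexdigest()
print(empty_digest)
# e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
assert empty_digest.startswith("e3b0c442")
```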

What LLM personal computing lacks in broad world knowledge and top-tier problem-solving capability, it will make up for in super-low interaction latency (especially as multimodal matures), direct and private access to data and state, offline continuity, and sovereignty ("not your weights, not your brain"); i.e., many of the same reasons we like, use, and buy personal computers instead of having thin clients access a cloud via remote desktop.

Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI, CS231n/PhD @ Stanford. I like to train large deep neural nets.

Andrej Karpathy
Fri Jun 27 15:52:02

Do people *feel* how much work there is still to do? Like wow.

Andrej Karpathy
Fri Jun 27 15:52:02
Mikayla Raines, founder of "Save a Fox Rescue" just took her own life.

Two months ago, she was seen on r/SaveAfoxSnark attempting to defend herself from her detractors.

Here's how the subreddit's creator responded when she showed up:

It's time to address "Snark" subreddits. 🧵

Back in April 2024, Reddit user u/Pale-Explanation-709 created r/SaveAfoxSnark after rumors surfaced of a bobcat biting a volunteer.

Reddit Lies
Tue Jun 24 15:05:50
🇹🇷 A summary of the Turkish trolleybus affair 🚎

I'm glad the affair of the Turkish trolleybuses that @ZdenekHrib bought for Prague through @DPPOficialni has caught your attention.

Here's a short summary of what it's actually about. Enjoy the read before the weekend starts 🧵

Prague's leadership has come under fire over the purchase of seventy trolleybuses from the Turkish manufacturer Bozankaya. The manufacturer's vehicles are not homologated for European urban service and have previously faced problems in, for example, Belgrade, Teplice, Pardubice, and České Budějovice, where they were excluded from tenders over fundamental technical or administrative shortcomings.

Pan úředník
Fri Jun 20 17:12:43
Grok 3 is insanely powerful.

But most people don’t know how to use it.

Here are 10 insanely powerful prompts to automate deep research: 👇

1/ Research Strategist Prompt: "Act as a Research Strategist and provide a step-by-step guide for using frameworks like SWOT or PESTLE in organizing research for {topic}. Include time-saving tips and tools."
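
If you want to reuse these templates programmatically, a minimal sketch is just string substitution; `build_prompt` is a hypothetical helper, and sending the result to Grok is left to whatever client you use:

```python
# Hypothetical helper: fill the {topic} slot in a reusable prompt template.
TEMPLATE = (
    "Act as a Research Strategist and provide a step-by-step guide for "
    "using frameworks like SWOT or PESTLE in organizing research for "
    "{topic}. Include time-saving tips and tools."
)

def build_prompt(topic: str) -> str:
    return TEMPLATE.format(topic=topic)

print(build_prompt("on-device language models"))
```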

Alex Hughes
Wed Jun 18 09:35:32
🚨Google search just changed forever.

They made it more agentic and hyper-personalized.

Here's AI Mode by Google, as seen at Google I/O, with Gemini 2.5 at its core. 🧵

Here's how AI Mode works under the hood:

- Calls a custom version of Gemini
- Breaks topics into smaller questions and searches across the entire web, including local data from Maps
- Checks its searches to provide 100% relevant information
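
A rough sketch of that fan-out-and-check loop; `decompose`, `web_search`, and `relevant` are hypothetical stand-ins for the Gemini and search calls, not Google internals:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(query: str) -> list[str]:
    # Hypothetical stand-in: the real system would ask Gemini to break
    # the topic into smaller sub-questions.
    return [query + " overview", query + " recent news", query + " near me"]

def web_search(question: str) -> list[str]:
    # Hypothetical stand-in for a search backend (web index, Maps data, ...).
    return ["result for: " + question]

def relevant(question: str, result: str) -> bool:
    # Hypothetical check step: grade each result against its sub-question.
    return question in result

def ai_mode(query: str) -> list[str]:
    sub_questions = decompose(query)
    with ThreadPoolExecutor() as pool:  # fan the searches out in parallel
        result_lists = list(pool.map(web_search, sub_questions))
    return [r for q, results in zip(sub_questions, result_lists)
            for r in results if relevant(q, r)]

print(ai_mode("commuter e-bikes"))
```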

Alif Hossain
Sun Jun 15 16:07:43