Thread Easy
  • Explore
  • Compose a thread

Your all-in-one partner for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first — browse tweet threads

Wow, ChatGPT is already showing ads?

I was just talking with it about Elon on Nikhil’s podcast when out of nowhere it popped up an ad saying, “Find a fitness class, Connect Peloton.” 🤯

Wild. At least match the ad to the topic next time!

I’m on the $200/month ChatGPT Pro plan btw.

Yuchen Jin
Mon Dec 01 05:01:12
btw i've been calling out these components of the LLM OS thesis for the last ~year on @latentspacepod

red boxes have REALLY taken off (differentially; google my Impossible Triangles theory) this yr. basically the only horizontal devtools with pmf outside of the neoclouds and koding agents.

green i think is comparatively underrated still. watch @jerryjliu0's 2025 AIE talk.

achieve ambition with intentionality, intensity, & integrity - @dxtipshq - @sveltesociety - @aidotengineer - @latentspacepod - @cognition + @smol_ai

swyx
Mon Dec 01 04:59:21
this meme was considered a joke of dubious historicity btw
it seems that the naive narrative “European Huns are just descendants of the Xiongnu that the Han had kicked out” was straight up correct
and there were other Asian wash-ups like them!

We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «C’est la guerre.» ®1

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Mon Dec 01 04:55:20
RT @dejavucoder: prompt caching is the most bang for buck optimisation you can do for your LLM based workflows and agents. in this post, i…
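The idea behind the prompt-caching retweet can be sketched as a toy in Python. Everything here is illustrative, not any provider's real API: `PrefixCache` and its fake `_encode` step stand in for the expensive prefill over a shared prompt prefix (in a real serving stack this would be reusing the transformer KV cache), and the point is simply that the shared prefix is processed once, not per request.

```python
import hashlib

class PrefixCache:
    """Toy prompt cache: reuse the work done for a shared prompt prefix
    (e.g. a long system prompt) across many requests. '_encode' stands
    in for the expensive prefill step of a real model."""

    def __init__(self):
        self._store = {}
        self.misses = 0  # how many times the expensive step actually ran

    def _encode(self, prefix: str) -> str:
        self.misses += 1          # expensive prefill happens here
        return prefix.upper()     # placeholder for real KV-cache state

    def get(self, prefix: str) -> str:
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key not in self._store:
            self._store[key] = self._encode(prefix)   # cache miss: do the work
        return self._store[key]                       # cache hit: free

cache = PrefixCache()
system_prompt = "You are a helpful assistant."
for user_msg in ["hi", "help me", "thanks"]:
    state = cache.get(system_prompt)  # prefix work is done once, then reused
print(cache.misses)  # prints 1
```

The "bang for buck" framing holds because the cache key is cheap (a hash of the prefix) while the thing it avoids (prefill over thousands of prefix tokens, per request) is the dominant cost in many agent loops.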

making models learn • eXperiments lab • memes and training lores

tokenbender
Mon Dec 01 04:49:54
RT @jukan05: Rumors that Japan has halted semiconductor material supplies to China have been spreading, and as a result, the share price of…

Root node of the web of threads: https://t.co/ifH80GcLpo

James Torre
Mon Dec 01 04:44:11
Solving from memory vs solving from scratch--Or the Futility of applying "Complexity Lens" to LLMs #SundayHarangue #NeurIPS2025 Edition

I continue to be puzzled by the insistence on viewing LLM task performance in terms of the computational complexity of the underlying task (see https://t.co/4X1yQFY3KH ).

This despite plenty of anecdotal evidence already showing that the Jagged Intelligence of LLMs has no direct connection to task complexity. LLMs can be competitive on International Mathematical Olympiad problems, while still falling for "Amazon sent me a left shoe instead of a right shoe, and vice versa" juvenile gotchas (y'all should follow @conitzer for a seemingly never-ending list of these gotchas for SOTA LLMs!).

Computational complexity is often in terms of solving a task algorithmically from scratch. Everything in pre-training, post-training and inference in LLMs instead screams solving from memory. 

Of course, this doesn't mean that LLMs are just directly retrieving the solution to an individual task prompt from a large library of previous solutions. It is that they are trying to address the task prompt not by solving it algorithmically from scratch, but by some trial-and-error process of composing knowledge gleaned from pre- and post-training on human knowledge.

From this perspective, the "intermediate tokens" output by the reasoning models are to be interpreted not as traces of some from-scratch algorithm, but perhaps as a footprint of the model's attempts to compose the prior knowledge in its memory to address the current task prompt.

(As I argue elsewhere, https://t.co/qE0vAwB636, pre-training can be seen as ingesting humanity's declarative knowledge, while post-training can be seen as incrementally ingesting humanity's procedural knowledge--in terms of ever longer unrollings of the procedures).

The cost/accuracy of such compositional trial-and-error problem solving is based not on the from-scratch computational complexity of the current task prompt, but rather on how easy it is to assemble a solution for it from the current memory. This is why LLMs suffer low accuracy on tasks that are far from the pre- and post-training distribution. See https://t.co/RL9ZEOKbpQ.

A tell-tale sign of memory-based problem solving is that the model might have both low accuracy and longer intermediate tokens ("computation") when the problem is out of the training distribution--even if it is in fact trivially solvable from scratch. This is the message of our "Performative Thinking" paper--https://t.co/itCXNctKZ1--to be presented at the #NeurIPS2025 Efficient Reasoning workshop.
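The thread's central contrast--a task can be trivially solvable from scratch yet still defeat a memory-based solver when it lies outside the "training" set--can be made concrete with a deliberately crude toy. All names here are illustrative and this is not a model of how LLMs actually work; it only shows that a retrieval-based solver's success tracks distance from its library, not the task's intrinsic complexity:

```python
def solve_from_scratch(nums):
    """Ground-truth algorithm: sorting, whose cost depends only on input
    size, never on whether the input was seen before."""
    return sorted(nums)

class MemorySolver:
    """Answers by retrieving stored solutions for inputs seen in
    'training'. Accuracy depends on closeness to that library, not on
    the task's from-scratch computational complexity."""

    def __init__(self, training_inputs):
        self.library = {tuple(x): sorted(x) for x in training_inputs}

    def solve(self, nums):
        key = tuple(nums)
        if key in self.library:            # in-distribution: cheap and correct
            return self.library[key], "retrieved"
        return list(nums), "failed"        # out-of-distribution: no fallback algorithm

solver = MemorySolver(training_inputs=[[3, 1, 2], [5, 4]])
print(solver.solve([3, 1, 2]))  # in-distribution: retrieved correctly
print(solver.solve([9, 7, 8]))  # fails, even though sorting it from scratch is trivial
```

The second call is the "tell-tale sign" in miniature: the failure has nothing to do with sorting being hard, and everything to do with `[9, 7, 8]` being absent from the solver's memory.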

AI researcher & teacher @SCAI_ASU. Former President of @RealAAAI; Chair of @AAAS Sec T. Here to tweach #AI. YouTube Ch: https://t.co/4beUPOmf6y Bsky: rao2z

Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
Mon Dec 01 04:44:08