Thread Easy

Your all-in-one companion for Twitter threads

© 2025 Thread Easy. All Rights Reserved.

Explore

Browse tweet threads, newest first

RT @heyanuja: navier stokes describes how fluids move
the millennium prize asks, can a fluid's energy concentrate into a single point in fi…

Root node of the web of threads: https://t.co/ifH80GcLpo

James Torre
Mon Dec 22 17:34:01
RT @heyanuja: I made a Goodreads for academic papers!
(..and blog posts, substacks, lesswrong, etc)

Paper Trails [https://t.co/dPzhlBgo8E]…

Root node of the web of threads: https://t.co/ifH80GcLpo

James Torre
Mon Dec 22 17:33:22
My AI prediction for 2026 is that the amount of alpha in ignoring 90% of all AI chatter will only increase as the benchmark numbers continue to get further untethered from the hard reality of incorporating LLMs into real features for real customers who are doing real work.

Author. Coder. CTO. θηριομάχης. Building: https://t.co/otXT4Wy6WR. Writing: https://t.co/dBPBtyCIHw.

Jon Stokes
Mon Dec 22 17:33:04
As a builder in AI, there has just been so much alpha in ignoring AI hype & vagueposting & manic FOMO & threadbois, and treating LLMs as one tool in the toolbox & as yet another REST API that I can call & get a useful response from.

Jon Stokes
Mon Dec 22 17:33:03
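The "LLM as just another REST API" approach in the tweet above can be sketched roughly as follows. This is a minimal illustration, not the author's code: the endpoint URL, model name, and payload schema are all assumptions standing in for whatever chat-completion-style service is actually used.

```python
import json
from urllib import request

def build_chat_request(prompt: str,
                       endpoint: str = "https://api.example.com/v1/chat/completions",
                       model: str = "example-model") -> request.Request:
    """Build a plain HTTP POST for a chat-completion-style REST API.

    The endpoint, model name, and JSON schema here are illustrative
    placeholders; substitute whichever provider you actually call.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Treating the LLM as one more HTTP dependency: build the request here,
# send it with urllib.request.urlopen (or any HTTP client), then parse
# the JSON body of the response like any other service's.
req = build_chat_request("Summarize this thread.")
```

The point of the tweet is the framing, not the plumbing: no special SDK or framework is required, just an ordinary request/response cycle you can test, mock, and swap like any other external API.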
In the new year, Symbolic will Big News -- watch this space. At that time, I will take a little mini victory lap, & I'll discuss some of the approach that let us achieve what we'll announce. But to scoop myself, my own secret edge as a CTO is: I just don't believe the AI hype 🤷‍♂️

I don't believe the hype about "AI" repos with a bazillion github stars. I clone into them & realize there isn't much real work there, & anything that is interesting in there I can just lift & put into my own app w/out adding a dependency.

Jon Stokes
Mon Dec 22 17:32:59
RT @heyanuja: the local enstrophy function is  defined as 0 with a comment "placeholder value; properties axiomatized"
then the next lines…

Root node of the web of threads: https://t.co/ifH80GcLpo

James Torre
Mon Dec 22 17:32:59
Page 130 of 5634