Thread Easy

Your all-in-one partner for Twitter threads


Explore

Newest first — browse tweet threads


Forging the Unknown Path: The Grind Without a Timeline and the Birth of the Emotionless Grinding Machine

When pursuing your life’s work, you’ll face setbacks that can cost you everything you’ve worked for and force you to restart.

When someone loses it all, the road back is lonely. You make sacrifices no one sees. Setback after setback builds on itself. There are days when it feels hopeless and your life reads like a cautionary tale. Anxiety rises. Depression creeps in. You replay every mistake. You imagine alternate versions of yourself, versions that reflect what you want most.

But if you keep moving and refuse to become a victim, you will eventually find your way back.


Investor @a16z | God chose the foolish things of the world to shame the wise - 1 Cor. 1:27 | 🇸🇻

Gabriel Vasquez
Wed Dec 10 17:25:51
> You’ll implement ColBERT to understand multi-vector search [and] apply ColPali for patch-level image retrieval.

So happy to see the great folks at @DeepLearningAI @AndrewYNg host a course on late interaction (ColBERT, ColPali et al) after their short course on DSPy :D


h/t @ClaudeFeldges for sharing the news with me
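
For anyone new to the topic, late interaction keeps one embedding per token and scores a query against a document with the MaxSim operator instead of pooling everything into a single vector. A minimal NumPy sketch of ColBERT-style scoring (shapes and names are illustrative, not the course's code):

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT-style late-interaction relevance score.

    query_vecs: (num_query_tokens, dim) L2-normalized token embeddings
    doc_vecs:   (num_doc_tokens, dim)  L2-normalized token embeddings
    """
    # Cosine similarity between every query token and every document token.
    sim = query_vecs @ doc_vecs.T
    # MaxSim: for each query token, keep its best-matching document token,
    # then sum those maxima over the query tokens.
    return float(sim.max(axis=1).sum())
```

ColPali applies the same operator to images, treating each image-patch embedding the way ColBERT treats a document token.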

Omar Khattab
Wed Dec 10 17:22:59
1. Create high-quality content
2. Increase your site's weight/authority (e.g., by building backlinks)

Time will tell you the answer.


✦ Indie Hacker / AI Maker / Full Stacker ✦ Founder of https://t.co/HDnzUGieag (DR 75) & https://t.co/t6DoP7ODNe & https://t.co/YuOLvgIStF & https://t.co/ZvHVC3guiZ

Justin3go
Wed Dec 10 17:21:41
I asked @echen why Claude writes (and codes) so much better than other models. His answer: higher-quality training data.

"Most people don't understand what quality even means in this space. They think you could just throw bodies at a problem and get good data, and that's completely wrong.

Let me give you an example.

Imagine you wanted to train a model to write an eight-line poem about the moon. What makes it a good poem? 

If you don't think deeply about quality, you'll be like, is this a poem? Does it contain eight lines? Does it contain the word moon? You check all these boxes? So then yeah, sure, you say it's a great poem.

But that's completely different from what we want. We are looking for Nobel Prize-winning poetry. Is this poetry unique? Is it full of subtle imagery? Does it surprise you, and tug at your heart? Does it teach you something about the nature of moonlight? Does it play through emotions, and does it make you think?

That's what we are thinking about when we think about a high-quality poem."
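
To make the contrast concrete, here is a toy sketch of the box-checking rubric the quote criticizes (hypothetical code, not anything from the interview):

```python
def naive_poem_check(text: str) -> bool:
    """The checkbox rubric: eight lines and the word "moon".
    Trivially satisfiable, and silent on imagery, surprise, or emotion."""
    lines = [ln for ln in text.strip().splitlines() if ln.strip()]
    return len(lines) == 8 and "moon" in text.lower()
```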


Deeply researched product, growth, and career advice

Lenny Rachitsky
Wed Dec 10 17:20:53
GenZ is the social media-native generation. Raised on influencers, followers, and virality

What will the AI-native generation be like?

Feels like it might end up the most empowered, most educated and capable generation of people so far


🇺🇸 a16z speedrun

andrew chen
Wed Dec 10 17:17:10
Quick new post: Auto-grading decade-old Hacker News discussions with hindsight

I took all 930 frontpage Hacker News article+discussion threads from December 2015 and asked the GPT 5.1 Thinking API to do an in-hindsight analysis identifying the most and least prescient comments. This took ~3 hours to vibe code and ~1 hour and $60 to run. The idea was sparked by the HN article yesterday where Gemini 3 was asked to hallucinate the HN front page one decade forward.
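
A minimal sketch of what one such grading call could look like (the model identifier, prompt, and data layout are assumptions for illustration, not the code from the linked repo):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def grade_thread(title: str, comments: list[dict]) -> str:
    """Ask the model to judge, with a decade of hindsight, which comments
    on a December 2015 HN thread proved most and least prescient."""
    body = "\n\n".join(f"[{c['id']}] {c['author']}: {c['text']}" for c in comments)
    prompt = (
        f"Hacker News article (December 2015): {title}\n\n{body}\n\n"
        "With the benefit of hindsight from today, identify the most and "
        "least prescient comments and briefly justify each pick."
    )
    resp = client.chat.completions.create(
        model="gpt-5.1",  # placeholder; the post says "GPT 5.1 Thinking API"
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```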

More generally: 

1. In-hindsight analysis has always fascinated me as a way to train your forward prediction model, so reading the results is really interesting, and
2. it's worth contemplating what it looks like when LLM megaminds of the future can do this kind of work a lot cheaper, faster, and better. Every single bit of information you contribute to the internet can (and probably will) be scrutinized in great detail if it is "free". Hence also my earlier tweet from a while back: "be good, future LLMs are watching".

Congrats to the top 10 accounts pcwalton, tptacek, paulmd, cstross, greglindahl, moxie, hannob, 0xcde4c3db, Manishearth, and johncolanduoni - GPT 5.1 Thinking found your comments to be the most insightful and prescient of all HN comments from December 2015.

Links:
- A lot more detail in my blog post https://t.co/7LpJEVgbyk
- GitHub repo of the project if you'd like to play https://t.co/WVQUbUzt2y
- The actual results pages for your reading pleasure


Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI, CS231n/PhD @ Stanford. I like to train large deep neural nets.

Andrej Karpathy
Wed Dec 10 17:15:14