Thread Easy
  • Explorer
  • Compose a thread

Your all-in-one partner for Twitter threads

Explorer

Newest first — browse tweet threads

Taste of what LLM-driven scientific discovery will be like: with 12 minutes of thinking, GPT-5 Pro suggested repurposing a known drug to treat an untreatable food allergy. Same exact result was found by an (at the time unpublished) peer-reviewed study. And models still improving.

President & Co-Founder @OpenAI

Greg Brockman
Sun Nov 02 18:52:59
RT @morganb: It’s real and it’s spectacular.

🏗️ Love to build stuff (@runwayco, @sandboxvr, @postmates, @zynga) people love. 💸 Investor @amplitude_hq, @mercury, @owner, @elevenlabsio, @meetgamma ++

Siqi Chen
Sun Nov 02 18:51:58
RT @soleio: I’ve designed software, backed startups and helped founders. Now I’m writing fiction.

This piece is from a collection of fable…

Siqi Chen
Sun Nov 02 18:51:33
broader context here:

Asst professor @MIT EECS & CSAIL (@nlp_mit). Author of https://t.co/VgyLxl0oa1 and https://t.co/ZZaSzaRaZ7 (@DSPyOSS). Prev: CS PhD @StanfordNLP. Research @Databricks.

Omar Khattab
Sun Nov 02 18:50:47
though, as ever, once a capability is achieved with a system, it’s easy to distill it back into model weights:

Omar Khattab
Sun Nov 02 18:48:52
One reason I’m more bullish than ever on “LLMs” is that essentially *no one* in the field is a DNN forward pass maximalist or a self-supervision maximalist anymore.

It’s just not the same meaning as 2020-22 LLMs.

Omar Khattab
Sun Nov 02 18:48:05