Thread Easy

The all-in-one partner for Twitter threads

© 2025 Thread Easy. All rights reserved.

Browse

Newest first: browse tweet threads


The strange satisfaction of finding an enormous bug in the code that gave disappointing experimental results.


Research Scientist @meta (FAIR), Prof. @Unige_en, co-founder @nc_shape. I like reality.

François Fleuret
Sun Dec 21 19:25:16
RT @eddwinchester: When you’re getting your Pizza, but you don’t want the Cat to escape from the door.


Joscha Bach
Sun Dec 21 19:25:09
Momentum = velocity in product iteration & distribution 👇🏼

“95% (of growth) comes from launching new features and products”

“Lovable’s main growth and retention strategy: ship features fast enough that customers feel the product is always alive.”


Partner @a16z | Investor in @elevenlabsio, @function, @cluely, @trymirage, @slingshotai_inc, @partiful & more | Growth @Snap & CFO @livebungalow

Bryan Kim
Sun Dec 21 19:25:02
Just wow - "wild efficiency gains" with Thinking Machine's approach 🚀🚀🚀

"Post-train with MOPD: We adopted On-Policy-Distillation from Thinking Machine to merge multiple RL models, and the efficiency gains were wild. 

We matched the teacher model's performance using less than 1/50th the compute of a standard SFT+RL pipeline. 

There’s a clear path here for a self-reinforcing loop where the student evolves into a stronger teacher."
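The thread above centers on on-policy distillation: training a student model to match a teacher by minimizing a divergence evaluated on the student's own outputs. As a toy illustration of that objective only, here is a minimal NumPy sketch that drives a single 4-token softmax toward a teacher distribution by gradient descent on the reverse KL. All names and numbers are made up for illustration; this is not Thinking Machine's actual MOPD pipeline, which operates on sequence-level rollouts rather than a single distribution.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reverse_kl_grad(student_logits, teacher_logits):
    """Reverse KL(student || teacher) over one softmax, and its exact
    gradient with respect to the student logits."""
    p = softmax(student_logits)
    q = softmax(teacher_logits)
    log_ratio = np.log(p) - np.log(q)
    kl = float((p * log_ratio).sum())
    grad = p * (log_ratio - kl)  # d KL / d student_logits
    return kl, grad

# Hypothetical setup: a one-step "vocabulary" of 4 tokens.
teacher_logits = np.array([2.0, 0.5, -1.0, -1.0])
student_logits = np.zeros(4)

for _ in range(500):  # plain gradient descent on the reverse KL
    kl, grad = reverse_kl_grad(student_logits, teacher_logits)
    student_logits -= 0.5 * grad

print(round(kl, 6))  # KL shrinks toward 0 as the student matches the teacher
```

In the full sequence setting, the expectation inside the KL is estimated from rollouts the student generates itself, which is what makes the method "on-policy"; the toy version above collapses that to an exact sum over four tokens.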


Artificial Intelligence @amazon, @awscloud. Reinforcement Learning, OSS AI, General Purpose Agents, Autonomy. All views personal!

GDP
Sun Dec 21 19:24:10
RT @TobyMeadows: @burakkayaburak @ElliotGlazer @JasonRute @IsaacKing314 @tracewoodgrains I've never met an analyst who cares about this.

I…


Root node of the web of threads: https://t.co/ifH80GcLpo

James Torre
Sun Dec 21 19:23:33
RT @alihkw_: being able to watch this pre kscale would have saved me so much pain lol. great stuff!!


GPUs and tractors. Neural Networks from Scratch book: https://t.co/hyMkWyUP7R https://t.co/8WGZRkUGsn

Harrison Kinsley
Sun Dec 21 19:22:51