Thread Easy

Your all-in-one Twitter thread assistant


Explore

Newest first; browse threads as cards


Of course, DYOR. Weight gain due to enhanced water retention, GI irritation, and headaches are the most commonly reported side effects. 

https://t.co/hp0Ult1gl5
https://t.co/GUJIBzE1EW
https://t.co/JVcJt5WHll

But overall creatine is extremely well researched over long time frames, cheap, and readily available. If you take one supplement, make it creatine.

robin
Tue Dec 16 20:26:35
Why I take creatine and you should too 🧵

Most people think creatine is for gym bros. It's more than that. An energy and cognitive boost. 

Trying some new post types on consumer health as I build in the space. Goal is to share research so people live longer, healthier lives.

The strength effects of creatine are well studied. The chart above shows average differences in upper body lifts on creatine (4.4kg or ~10 pounds). I don't need to explain more here, everyone's familiar with this.

robin
Tue Dec 16 20:26:34
LLMs are not going to be AGI. We won't RL all the jobs out of existence with these things. The best thing you can say about LLMs => AGI is they will unlock the rest of whatever the actual AGI tech tree is and help us walk it.

Author. Coder. CTO. θηριομάχης. Building: https://t.co/otXT4Wy6WR. Writing: https://t.co/dBPBtyCIHw.

Jon Stokes
Tue Dec 16 20:24:35
No. In ten years, people will still be doing many kinds of jobs and other people will be tweeting about how AGI is about to do all the jobs everywhere any year now.

Jon Stokes
Tue Dec 16 20:21:20
you tend to hear this a lot from people outside or new to ML, and I often point to a talk Ilya gave a few years back:

1) think of any decent deep neural net that has enough memory and sequential ops as just a big parallel computer 

2) training this neural net is doing search over computer programs that maximize your objective

3) unless you have some large bottleneck (and given you can successfully optimize this system) you’ll find that these parallel computers are highly robust to architectural changes.

4) this is because computers are great at simulating each other. your new architecture can usually be straightforwardly simulated ‘inside’ your old architecture.

5) it’s not that architecture doesn’t matter, but it mostly matters with respect to (1) fundamental bottlenecks in this parallel computer (2) modifications that make models easier to optimize, since this argument only holds if your optimization is good (3) compute efficiency/system efficiency wins that make learning easier or faster.

6) it’s quite possible that new architectures will lead to breakthroughs in machine learning, but we should start with bottlenecks, not naturalist intuitions about the ‘form’ AI should take. until you understand this, it seems surprising that small models trained longer are better than undertrained big models, that depth and width are surprisingly interchangeable, and that talking to a model with an MoE or sparse attention or linear attention is approximately the same at iso evals.
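
To make the last two observations in point 6 concrete, here is a rough back-of-the-envelope sketch. It is not from the thread: the ~12·d² per-layer parameter count, the C ≈ 6·N·D training-compute rule, and the ~20 tokens-per-parameter target are standard heuristics from the scaling-laws literature (the last from the Chinchilla results), and every specific number below is an illustrative assumption.

```python
# Rough sketch of two claims from the thread above, using standard
# back-of-the-envelope formulas (not anything the author posted):
#   * depth and width are roughly interchangeable at a fixed parameter count
#   * for a fixed compute budget, a smaller model trained on more tokens can
#     beat a bigger but undertrained one (C ~ 6*N*D, compute-optimal D ~ 20*N)

import math

def transformer_params(n_layers: int, d_model: int) -> float:
    """Non-embedding parameters of a GPT-style stack:
    ~4*d^2 attention projections + ~8*d^2 for a 4x MLP = 12*d^2 per layer."""
    return 12 * n_layers * d_model**2

# Depth vs. width: very different shapes, same parameter budget.
deep_narrow = transformer_params(n_layers=48, d_model=1024)
shallow_wide = transformer_params(n_layers=12, d_model=2048)
print(f"48 layers x d=1024: {deep_narrow / 1e6:.0f}M params")
print(f"12 layers x d=2048: {shallow_wide / 1e6:.0f}M params")  # same count

# Compute-optimal sizing under a fixed FLOP budget (illustrative numbers).
C = 1e21  # assumed training FLOPs budget for the example

def tokens_for(n_params: float, flops: float) -> float:
    """Tokens you can afford for a given model size, from C ~ 6*N*D."""
    return flops / (6 * n_params)

big_n = 10e9                 # a "big" model for this budget
opt_n = math.sqrt(C / 120)   # solve C = 6*N*(20*N) for the compute-optimal N
for n in (big_n, opt_n):
    d = tokens_for(n, C)
    print(f"N={n / 1e9:.1f}B params -> D={d / 1e9:.0f}B tokens "
          f"({d / n:.1f} tokens/param)")
# The 10B model gets only ~1.7 tokens/param (undertrained); the ~2.9B model
# gets ~20 tokens/param, which the Chinchilla results favor at this budget.
```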

dei ex machina @openai, past: posttraining o3/4o, sora 1 & 2, applied research

will depue
Tue Dec 16 20:20:49
RT @donpark: In my experience, DSPy is not just good for optimization but also great for unrooting pipeline issues much earlier, during the…
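
For context on the library mentioned here, below is a minimal, hedged sketch of what a declared DSPy step looks like and how its prompt/completion trace can be inspected while developing. The model string and example inputs are placeholders, and exact API names can vary across DSPy versions.

```python
import dspy

# Placeholder model string; configure whichever LM your DSPy version supports.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

# Declare the pipeline step as a signature instead of a hand-written prompt.
qa = dspy.ChainOfThought("context, question -> answer")

prediction = qa(
    context="DSPy lets you declare LM pipelines and optimize them later.",
    question="What does DSPy let you declare?",
)
print(prediction.answer)

# Print the exact prompt and completion behind that call -- useful for
# spotting pipeline issues during development, before any optimizer runs.
dspy.inspect_history(n=1)
```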

Asst professor @MIT EECS & CSAIL (@nlp_mit). Author of https://t.co/VgyLxl0oa1 and https://t.co/ZZaSzaRaZ7 (@DSPyOSS). Prev: CS PhD @StanfordNLP. Research @Databricks.

Omar Khattab
Tue Dec 16 20:18:42