Thread Easy

Your complete companion for Twitter threads

© 2025 Thread Easy. All Rights Reserved.

Explore

Newest first — browse tweet threads


Decided to turn my NeurIPS recap into a game ✨

I took real photos from the conference and transformed them with two of my favorite models: Flux 2 and Veo 3.

Even squeezed in a mention of the topic du jour - continual learning!


Example of image transformation (w/ Flux 2 from @bfl_ml): “Turn this photo into a pixel style game” Having a ton of people in the image can make video generation ~noisy~, so you can also ask to “make it less crowded.” Do this for a start + end clip for each section.

avatar for Justine Moore
Justine Moore
Sun Dec 07 18:15:20
Don't think of LLMs as entities but as simulators. For example, when exploring a topic, don't ask:

"What do you think about xyz?"

There is no "you". Next time try:

"What would be a good group of people to explore xyz? What would they say?"

The LLM can channel/simulate many perspectives but it hasn't "thought about" xyz for a while and over time and formed its own opinions in the way we're used to. If you force it via the use of "you", it will give you something by adopting a personality embedding vector implied by the statistics of its finetuning data and then simulate that. It's fine to do, but there is a lot less mystique to it than I find people naively attribute to "asking an AI".
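The reframing above can be sketched as a pair of prompt builders. The function names here are hypothetical illustrations of the two phrasings, not part of any real API:

```python
# Sketch of the two prompt framings contrasted in the thread.
# Both helpers are hypothetical; they only build prompt strings.

def build_entity_prompt(topic: str) -> str:
    """The discouraged 'entity' framing: asks the model for *its* opinion."""
    return f"What do you think about {topic}?"

def build_panel_prompt(topic: str) -> str:
    """The suggested 'simulator' framing: asks the model to channel
    a panel of relevant perspectives instead of a single persona."""
    return (
        f"What would be a good group of people to explore {topic}? "
        "What would each of them say?"
    )

if __name__ == "__main__":
    print(build_entity_prompt("continual learning"))
    print(build_panel_prompt("continual learning"))
```

The only change is in the prompt text, but it shifts the model from adopting one implied personality to simulating several explicitly named viewpoints.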


Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI, CS231n/PhD @ Stanford. I like to train large deep neural nets.

avatar for Andrej Karpathy
Andrej Karpathy
Sun Dec 07 18:13:45
To get things started, here is a @lawfare opinion piece with a different perspective - https://t.co/UxXgwE4fut


And here is the author's rebuttal - https://t.co/MqJbPwXr1K

avatar for Scott Kupor
Scott Kupor
Sun Dec 07 18:13:42
Understandably, tons of controversy and divergent opinions on birthright citizenship (and this academic piece has received its fair share), but in light of SCOTUS decision to review, can history/legal wonks who understand this way more than I share their papers and perspectives? https://t.co/6VEQTZRyuU

To get things started, here is a @lawfare opinion piece with a different perspective - https://t.co/UxXgwE4fut

avatar for Scott Kupor
Scott Kupor
Sun Dec 07 18:13:41
Axiom's Putnam statement formalization effort was genuinely nerdy fun. It's like a proving Hackathon: Prove-a-ton.

Our team ensuring the quality of this year's problem formalizations includes graduate students from Imperial College London, Cambridge, MIT, and Humboldt.

All are active Lean community members and Zulip users. Many are initial Mathlib contributors. Some are serious researchers in algebraic geometry and number theory. Some coach national Olympiad teams, and some - this is the part they asked me to include - did not do particularly well on Olympiad math.

Turning math problems that challenge you into Lean can be oddly therapeutic. 

AI for mathematics is expanding the incredible joy math can bring. There's something beautiful about mathematicians at different career stages, with different relationships to competition math, and of different programming experience, coming together to turn problems into programming languages.
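The kind of statement formalization described above can be sketched in Lean 4 with Mathlib. This is an illustrative toy statement only, not one of Axiom's actual Putnam formalizations; in a statement-formalization effort, only the statement must be faithful, so the proof is left open:

```lean
import Mathlib

/-- Toy example of statement formalization (not an actual Putnam problem):
    "the sum of the first n odd numbers equals n²". -/
theorem sum_first_odds (n : ℕ) :
    ∑ i in Finset.range n, (2 * i + 1) = n ^ 2 := by
  sorry -- only the statement needs to be checked for faithfulness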


@axiommathai : careers@axiommath.ai

avatar for Carina Hong
Carina Hong
Sun Dec 07 18:12:20