Thread Easy

Your all-in-one partner for Twitter threads

© 2025 Thread Easy. All rights reserved.

Explore


Back in 2019, ARC 1 had one goal: to focus the attention of AI researchers towards the biggest bottleneck on the way to generality, the ability to adapt to novelty on the fly, which was entirely missing from the legacy deep learning paradigm.

Six years later, the field has responded. With test-time adaptation, we finally have reasoning models capable of genuine fluid intelligence.

While ARC 1 is now saturating, SotA models are not yet human-level on an efficiency basis. Meanwhile ARC 2 remains largely unsaturated, showing these models are still operating far below the upper bound of human-level fluid intelligence. We're still only at a fraction of what a human mind is capable of in a single sitting with no external tooling (a level which is itself significantly above a full score on ARC 2), so there's more work to be done.

And as we get closer to AGI, the challenge goes beyond fluid intelligence. The new bottlenecks are exploration, goal-setting, and interactive planning. We are releasing ARC 3 in Q1 2026 to target exactly this. It's time to trigger a new class of breakthroughs.

Co-founder @ndea. Co-founder @arcprize. Creator of Keras and ARC-AGI. Author of 'Deep Learning with Python'.

François Chollet
Thu Dec 11 18:24:30
RT @themkmaker: Someone created a nice Youtube video comparing Typeform and Youform on several factors and seems like we are winning (altho…

I do things 👇 Form builder: https://t.co/xgd8ARmnxK Social media scheduler: https://t.co/bqC6HYRdk0 Private community: https://t.co/3oW03ira86

Davis from Youform & OneUp
Thu Dec 11 18:22:35
@LTXStudio Also this is day 6 of @omooretweets and I featuring one cool consumer AI product every day.

We've got an awesome lineup - it's a mix of creative tools, social, productivity, health, and more. 

Follow along to see what we cover next 👀

Partner @a16z AI 🤖 and twin to @omooretweets | Investor in @elevenlabsio, @krea_ai, @bfl_ml, @hedra_labs, @wabi, @WaveFormsAI, @ViggleAI, & more

Justine Moore
Thu Dec 11 18:19:19
Wait what?

GPT-5.2 is branded as “The best model for coding and agentic tasks across industries.” Direct challenge to Anthropic!

Knowledge cutoff: Aug 31, 2025. It's a freshly pretrained model. Exciting.

Holy moly. GPT-5.2 Thinking dominates every benchmark here!

Yuchen Jin
Thu Dec 11 18:14:42
RT @a16z: America has lost non-nuclear deterrence. Castelion CEO Bryon Hargis wants to change that.

At @CastelionCorp, Bryon is building a…

GP @a16z — Building American Dynamism 🇺🇸 — Anthropologist — Formerly Founder/CEO @OpenDNS — Lokah Samastah Sukhino Bhavantu

David Ulevitch 🇺🇸
Thu Dec 11 18:14:29
RT @victormustar: 🤯MUST TRY: Qwen-Image-i2L skips the training loop entirely. 

1-5 images in → LoRA weights out in seconds.

⬇️ Demo avail…

AI research paper tweets, ML @Gradio (acq. by @HuggingFace 🤗). DM for promo; submit papers here: https://t.co/UzmYN5XOCi

AK
Thu Dec 11 18:12:31