Thread Easy

Your complete partner for Twitter threads

Explore

Newest first — browse tweet threads

Love that Ruby 4.0 is still polishing the core fluidity of the language. After all these decades, we can still find subtle improvements that delight the discerning human eye.

Father of three, Creator of Ruby on Rails + Omarchy, Co-owner & CTO of 37signals, Shopify director, NYT best-selling author, and Le Mans 24h class-winner.

DHH
Sun Dec 21 16:12:05
Marry well. No investment compounds faster than a partner who believes in you, supports you, and grows with you.

I build stuff. On my way to making $1M 💰 My projects 👇

Florin Pop 👨🏻‍💻
Sun Dec 21 16:07:34
LeCun’s new company on physical AI with "world models" [9] looks a lot like our 2014 company [1]. He rehashes without attribution what I published decades ago [2][3]. See papers 1990-2018 on neural world models [4-7].

[1] NNAISENSE, the AGI company for AI in the physical world, founded in 2014, based on neural network world models (NWMs). J. Schmidhuber (JS) was its President and Chief Scientist - see his NWM papers 1990-2015, e.g., [4-5], and the 2020 NNAISENSE web page in the Wayback Machine Internet Archive - image attached. (Lately, however, NNAISENSE has become less AGI-focused and more specialised, with a focus on asset management.)

[2] JS, AI Blog (2022). LeCun's 2022 paper on autonomous machine intelligence rehashes but does not cite essential work of 1990-2015. 
Years ago, JS published most of what LeCun calls his "main original contributions:" neural nets that learn multiple time scales and levels of abstraction, generate subgoals, use intrinsic motivation to improve world models, and plan (1990); controllers that learn informative predictable representations (1997), etc. This was also discussed on Hacker News, reddit, and in the media. LeCun also listed the "5 best ideas 2012-2022" without mentioning that most of them are from JS's lab, and older. Popular tweets on this:
https://t.co/kn7KhFHLvw
https://t.co/FxALILsNRu
https://t.co/caTuctmztu
https://t.co/Rpip8HBzPA

[3] How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23, Swiss AI Lab IDSIA, 2023.
Best start with Section 3. See also [8]. Popular tweet on this:
https://t.co/0fJVklXyOr

[4] JS (1990). Making the world differentiable: on using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. TR FKI-126-90, TUM. This report used the terminology "world model" for a recurrent neural network that learns to predict the environment and the consequences of the actions of a separate controller neural net. It also introduced "artificial curiosity" and "intrinsic motivation" through generative adversarial networks. Led to lots of follow-up publications.

[4b] JS (2002). Exploring the Predictable. In A. Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002. Don't predict pixels - find predictable internal representations / abstractions of complex spatio-temporal events!

[5] JS (2015). On Learning to Think: Algorithmic Information Theory for Novel Combinations of RL Controllers and Recurrent Neural World Models. arXiv 1210.0118. Introducing a reinforcement learning (RL) prompt engineer and adaptive chain of thought: an RL neural net learns to query its "world model" net for abstract reasoning & decision making. Going beyond the 1990 neural world model [4] for millisecond-by-millisecond planning. See tweet for 10-year anniversary: https://t.co/3FYt4x2PMM

[6] JS (2018). One Big Net For Everything. arXiv 1802.08864. Collapsing the reinforcement learner and the world model of [5] (e.g., a foundation model) into a single network, using JS's neural network distillation procedure of 1991. See DeepSeek tweet: https://t.co/HIVU8BWAaS 

[7] David Ha & JS. World Models. NeurIPS 2018. 

[8] Who invented convolutional neural networks? 
Technical Note IDSIA-17-25, IDSIA, 2025.
Popular tweets on this:
https://t.co/6eDUT8qcNE
https://t.co/chfcmk253b
https://t.co/h27y6Ni2CA
https://t.co/Rpip8HBzPA

[9] Sifted dot eu (18 Dec 2024). Yann LeCun raising €500m at €3bn valuation for new AI startup. The outgoing Meta exec announced last month he was launching a new project to build “world models.” Quote: "The new company will focus on “world models”, systems that can understand the physical world instead of merely generating text like today’s large-language models (LLMs)." See [1].
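
The controller/world-model split described in [4] and [5] is easy to sketch in code: a model network M learns to predict the consequences of actions, and a separate controller C plans by rolling candidate action sequences through M instead of through the real environment. A minimal illustrative toy in Python, using linear models on an assumed 1-D task rather than the recurrent nets of the original papers:

```python
# Toy sketch of the controller/world-model split from [4] and [5]:
# a world model M learns to predict the next observation given the current
# observation and an action; a controller C then plans by rolling candidate
# action sequences through M instead of the real environment.
# Illustrative only: linear models on a 1-D toy task, not the 1990 RNN setup.
import numpy as np

rng = np.random.default_rng(0)

def env_step(x, a):
    """Hidden 'real' dynamics the world model must learn."""
    return 0.9 * x + 0.5 * a

# --- Train the world model M on random interaction data ---------------------
X = rng.uniform(-1, 1, size=(1000, 2))      # columns: observation, action
Y = env_step(X[:, 0], X[:, 1])              # observed next state
w, *_ = np.linalg.lstsq(X, Y, rcond=None)   # M's parameters

def model_step(x, a):
    return w[0] * x + w[1] * a              # M's prediction of the next state

# --- Controller C: pick actions by planning inside M ------------------------
def plan(x, horizon=5, candidates=200):
    """Return the first action of the imagined rollout ending closest to 0."""
    best_a, best_cost = 0.0, np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, size=horizon)
        sim_x = x
        for a in actions:                   # imagined rollout inside M
            sim_x = model_step(sim_x, a)
        if abs(sim_x) < best_cost:
            best_cost, best_a = abs(sim_x), actions[0]
    return best_a

x = 1.0
for t in range(10):
    x = env_step(x, plan(x))                # act in the real environment
print(f"state after planning with the learned world model: {x:.4f}")
```

The same division of labour is what scales up when M is a recurrent net queried by an RL controller, as in [5].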

Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

Jürgen Schmidhuber
Sun Dec 21 16:00:03
💾 Okay having nobody online to chat on AIM is boring

So I asked AI to write an AOL Instant Messenger bot called @pieterbot, and I made an account for it, and it one-shotted it in Python and IT WORKS!!!

So now you can chat on AIM on https://t.co/M1hEUBAynC with an LLM that is fully self-aware it is an AIM bot 🤠👍
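
The bot's own code isn't shown, but the shape of such a bridge is straightforward: listen for incoming instant messages, hand each one to an LLM with a system prompt saying it is an AIM bot, and send back the reply. A rough sketch under stated assumptions: `aim_client` is a hypothetical stand-in for an AIM/TOC protocol library (the real one isn't named), and the LLM side uses the OpenAI Python SDK:

```python
# Rough sketch of an AIM <-> LLM bridge in the spirit of @pieterbot.
# `aim_client` is a hypothetical wrapper for the AIM/TOC protocol; the real
# bot's code was not published, so its names and signatures are assumptions.
import os

from openai import OpenAI

import aim_client  # hypothetical AIM protocol library

llm = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

SYSTEM = (
    "You are pieterbot, a chatbot living inside AOL Instant Messenger. "
    "You know you are an AIM bot. Keep replies short, like a 2002 IM chat."
)

def on_message(sender: str, text: str) -> str:
    """Forward one incoming IM to the LLM and return the reply text."""
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": f"{sender} says: {text}"},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    bot = aim_client.connect(screen_name="pieterbot",
                             password=os.environ["AIM_PASSWORD"])
    bot.on_instant_message(on_message)  # reply to every incoming IM
    bot.run_forever()
```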

🇪🇺https://t.co/NdorAWqJC3 @euacc 📸https://t.co/lAyoqmSBRX $118K/m 🏡https://t.co/1oqUgfD6CZ $36K/m 🛰https://t.co/ZHSvI2wjyW $43K/m 🌍https://t.co/UXK5AFqCaQ $15K/m 👙https://t.co/RyXpqGuFM3 $14K/m 💾https://t.co/M1hEUBAynC $6K/m

@levelsio
Sun Dec 21 15:55:36
Liked this story? Follow me here @sobedominik for more. 

Also check out my newsletter where I occasionally share my startup lessons in even more detail. Ups and downs included ツ

https://t.co/FNU2RWDp0B

⚡ Founder & 🌊 Surfer bootstrapping SaaS. ✍️ Notion ➯ Help Center @HelpkitHQ 💰 Reddit ➯ Customers https://t.co/3kdqfXzlsK 🔋 Battery ➯ Alerts https://t.co/cX8QhAoG55

Dominik Sobe ツ
Sun Dec 21 15:51:58
> Zhang Ziyu
> Listed height 220 cm (7 ft 3 in)
> Listed weight 238 lb (108 kg)
Man, I was told that Shandong people are tall, but a 7'3" girl is a bit too much.
> Her parents were both basketball players, mother played for the China national team
yeah, another Yao Ming case
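
The listed figures do check out against each other; a quick conversion with standard factors:

```python
# Verify the listed height/weight conversions (2.54 cm/in, 2.20462 lb/kg).
cm, kg = 220, 108
inches = cm / 2.54                  # 86.6 in
feet, rem = divmod(inches, 12)      # 7 ft 2.6 in, listed as 7 ft 3 in
pounds = kg * 2.20462               # 238.1 lb, matching the listed 238 lb
print(f"{feet:.0f} ft {rem:.1f} in, {pounds:.0f} lb")
```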

We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «C’est la guerre.» ®1

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Sun Dec 21 15:39:44