![LeCun's new company on physical AI with world models [9] looks a lot like our 2014 company on physical AI with world models [1] 😀 See also [2-8] - all references in the reply!](/_next/image?url=https%3A%2F%2Fpbs.twimg.com%2Fmedia%2FG8tMtBCXEAAcnDn.jpg&w=3840&q=75)
REFERENCES

[1] NNAISENSE, the AGI company for AI in the physical world, founded in 2014, based on neural network world models (NWMs). J. Schmidhuber (JS) was its President and Chief Scientist; see his NWM papers of 1990-2015, e.g., [4-5], and the 2020 NNAISENSE web page in the Internet Archive: https://t.co/j6xsLXHdPs (Lately, however, NNAISENSE has become less AGI-focused and more specialised, with a focus on asset management.)

[2] JS, AI Blog (2022). LeCun's 2022 paper on autonomous machine intelligence rehashes but does not cite essential work of 1990-2015. https://t.co/byn3K3aSxK Years earlier, JS had published most of what LeCun calls his "main original contributions": neural nets that learn multiple time scales and levels of abstraction, generate subgoals, use intrinsic motivation to improve world models, and plan (1990); controllers that learn informative predictable representations (1997); etc. This was also discussed on Hacker News, Reddit, and in the media. LeCun also listed the "5 best ideas 2012-2022" without mentioning that most of them came from JS's lab and are older. Popular tweets on this: https://t.co/kn7KhFHLvw https://t.co/FxALILsNRu https://t.co/caTuctmztu https://t.co/Rpip8HBzPA

[3] How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23, Swiss AI Lab IDSIA, 2023. https://t.co/Nz0fjc6kyx Best to start with Section 3. See also [8]. Popular tweet on this: https://t.co/0fJVklXyOr

[4] JS (1990). Making the world differentiable: on using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. TR FKI-126-90, TUM. This report introduced the term "world model" for a recurrent neural network that learns to predict the environment and the consequences of the actions of a separate controller neural net. It also introduced "artificial curiosity" and "intrinsic motivation" through generative adversarial networks, and led to many follow-up publications.

[4b] JS (2002). Exploring the Predictable. In Ghosh & Tsutsui, eds., Advances in Evolutionary Computing, p. 579-612, Springer, 2002. Don't predict pixels; find predictable internal representations / abstractions of complex spatio-temporal events!

[5] JS (2015). On Learning to Think: Algorithmic Information Theory for Novel Combinations of RL Controllers and Recurrent Neural World Models. arXiv 1210.0118. Introduces a reinforcement learning (RL) prompt engineer and adaptive chain of thought: an RL neural net learns to query its "world model" net for abstract reasoning and decision making, going beyond the 1990 neural world model [4] for millisecond-by-millisecond planning. Tweet for the 10-year anniversary: https://t.co/3FYt4x2PMM

[6] JS (2018). One Big Net For Everything. arXiv 1802.08864. Collapses the reinforcement learner and the world model of [5] (e.g., a foundation model) into a single network, using JS's neural network distillation procedure of 1991. DeepSeek tweet: https://t.co/HIVU8BWAaS

[7] David Ha & JS. World Models. NeurIPS 2018. https://t.co/RrUNYSIz6n

[8] Who invented convolutional neural networks? Technical Note IDSIA-17-25, IDSIA, 2025. https://t.co/HdCanIa4MN Popular tweets on this: https://t.co/6eDUT8qcNE https://t.co/chfcmk253b https://t.co/h27y6Ni2CA https://t.co/Rpip8HBzPA LinkedIn: https://t.co/vzKQPhAGAy

[9] Sifted dot eu (18 Dec 2024). Yann LeCun raising €500m at €3bn valuation for new AI startup. The outgoing Meta exec announced last month he was launching a new project to build "world models." https://t.co/c21tW6sy3b Quote: "The new company will focus on 'world models', systems that can understand the physical world instead of merely generating text like today's large-language models (LLMs)." See [1].