I just launched a new startup today → VIDEO AI ME → Create your own AI clone and make studio-quality videos that go viral → https://t.co/eU2hD1uyoV


Artisanal baker of reasoning models @pleiasfr


https://t.co/dNyvDu6yiB


We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time. «It's war.»

![10 years ago: the reinforcement learning (RL) prompt engineer [1] (Sec. 5.3). Adaptive chain of thought: an RL neural net learns to query its "world model" net for abstract reasoning & decision making. Going beyond the 1990 neural world model [2] for millisecond-by-millisecond planning and the 1991 adaptive neural subgoal generator [3,4] for hierarchical planning.
[1] J. Schmidhuber (JS, 2015). On Learning to Think: Algorithmic Information Theory for Novel Combinations of RL Controllers and Recurrent Neural World Models. ArXiv 1210.0118.
[2] JS (1990). Making the world differentiable: On using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. TR FKI-126-90, TUM. (This report also introduced artificial curiosity and intrinsic motivation through generative adversarial networks.)
[3] JS (1991). Learning to generate sub-goals for action sequences. Proc. ICANN'91, p. 967-972.
[4] JS & R. Wahnsiedler (1992). Planning simple trajectories using neural subgoal generators. Proc. SAB'92, p. 196-202, MIT Press.](https://pbs.twimg.com/media/G62qakxWcAATu16.png)
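The caption above describes a controller network that queries a separately learned world-model network in order to plan. The following is a minimal, hypothetical sketch of that controller/world-model split, not the code from [1,2]: a toy controller rolls candidate action sequences through a stand-in world model and executes the first action of the best-scoring rollout. The linear dynamics, the reward, and all names here are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a controller "queries" a learned
# world model to plan, in the spirit of the controller/world-model split in [1,2].
# The toy environment, linear model, and reward are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained world-model network: predicts the next state
# from the current state and action (here just a fixed linear map).
W_state = np.array([[0.9, 0.1], [0.0, 0.95]])
W_action = np.array([[0.1], [0.2]])

def world_model(state, action):
    """Predicted next state; in [2] this role is played by a recurrent net
    trained by self-supervision on the agent's own experience."""
    return W_state @ state + W_action @ np.array([action])

def reward(state):
    """Toy reward: stay close to the origin."""
    return -float(state @ state)

def plan(state, horizon=5, candidates=64):
    """Controller queries the world model: simulate random action sequences
    through the model and return the first action of the best sequence."""
    best_action, best_return = 0.0, -np.inf
    for _ in range(candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        s, total = state.copy(), 0.0
        for a in actions:
            s = world_model(s, a)      # query the model instead of the real world
            total += reward(s)
        if total > best_return:
            best_return, best_action = total, actions[0]
    return best_action

state = np.array([1.0, -0.5])
print("planned first action:", plan(state))
```

In this sketch the "chain of thought" is the sequence of model queries inside `plan`; an RL version would additionally train the controller on which queries pay off, rather than sampling them at random.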
Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.


Quaker Libertarian Vegetarian Domain Investor #chess player @impervious, https://t.co/hbatJ5R7Mo, https://t.co/1kKnSOfXQL. Previous https://t.co/I6PIEzagKA
