Thread Easy

The all-purpose partner for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore


Every Saturday, I send 175,000+ entrepreneurs the strategies I use to turn expertise into income.

Read it, apply it, and your business gets better.

Always 4 minutes (or less) to read.

Free to join here: https://t.co/idNByMf6vn


The $10M Solopreneur | Helping 100,000+ experts turn their expertise into income.

Justin Welsh
Sun Dec 21 17:00:12
never let the groypers live this down. do not accept weakness in character.

this is who they are once you strip out the viewbots and qatar money.

nothing more destructive than weak men given an inch of influence.


Prompt engineering reality🪄🧠 ✨Building AGI for work✨

Soham Sarkar
Sun Dec 21 16:56:25
Parents: Are there any apps that will keep my 16-month-old occupied on a flight?

We have a 14-hour flight coming up. 😬


Optimistic about the future! ✨ Sci-Fi writer, husband to @huanancy, dad to a 👶, founder of https://t.co/rv3dfXJBbx (AI for writing) & Photojojo (sold 2014 🥲).

Amit Gupta
Sun Dec 21 16:47:47
i think my issue is i like the unix mentality too much and want my agentic scaffold to be as barebones as possible
checkpointing? git exists
lsp support? why, pyright is set up in CI and the agent can trigger that to check itself

to me the less features the better
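The workflow sketched in this tweet — plain git as the checkpoint store, with an external check as the only gate — can be written as a few lines of shell. This is a hypothetical illustration, not anyone's actual setup: the `run_check` function is a stand-in for `pyright` (which may not be installed here), and the throwaway repo exists only to make the sketch self-contained.

```shell
set -e
# throwaway repo so the sketch is self-contained
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email agent@example.com
git config user.name agent

echo 'x: int = 1' > main.py
# checkpointing? git exists: commit before the agent touches anything
git add -A && git commit -qm 'checkpoint: before agent run'

# the "agent" makes an edit that should fail the check
echo 'x: int = "oops"' > main.py

# stand-in for `pyright`: fails if the bad edit is present
run_check() { ! grep -q '"oops"' main.py; }

if ! run_check; then
    # roll back to the checkpoint instead of keeping a broken state
    git checkout -q -- .
fi
```

In a real setup the check would simply be `pyright`, the same command CI runs, so the agent and CI disagree about nothing.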


codegen & vibe @mistralai, husband

Q
Sun Dec 21 16:46:18
"'I was shaking. Because I know a lot of people throw around the words ‘intergenerational trauma.’ But our DNA has memory.'" This is the New York Times.


https://t.co/tRw21lEMcS

James Miller
Sun Dec 21 16:45:25
REFERENCES

[1] NNAISENSE, the AGI company for AI in the physical world, founded in 2014, based on neural network world models (NWMs). J. Schmidhuber (JS) was its President and Chief Scientist - see his NWM papers 1990-2015, e.g., [4-5], and the 2020 NNAISENSE web page in the Internet Archive
https://t.co/j6xsLXHdPs (Lately, however, NNAISENSE has become less AGI-focused and more specialised, with a focus on asset management.)

[2] JS, AI Blog (2022). LeCun's 2022 paper on autonomous machine intelligence rehashes but does not cite essential work of 1990-2015. 
https://t.co/byn3K3aSxK
Years ago, JS published most of what LeCun calls his "main original contributions:" neural nets that learn multiple time scales and levels of abstraction, generate subgoals, use intrinsic motivation to improve world models, and plan (1990); controllers that learn informative predictable representations (1997), etc. This was also discussed on Hacker News, reddit, and in the media. LeCun also listed the "5 best ideas 2012-2022" without mentioning that most of them are from JS's lab, and older. Popular tweets on this:
https://t.co/kn7KhFHLvw
https://t.co/FxALILsNRu
https://t.co/caTuctmztu
https://t.co/Rpip8HBzPA

[3] How 3 Turing awardees republished key methods and ideas whose creators they failed to credit. Technical Report IDSIA-23-23, Swiss AI Lab IDSIA, 2023.
https://t.co/Nz0fjc6kyx
Best start with Section 3. See also [8]. Popular tweet on this:
https://t.co/0fJVklXyOr

[4] JS (1990). Making the world differentiable: on using fully recurrent self-supervised neural networks for dynamic reinforcement learning and planning in non-stationary environments. TR FKI-126-90, TUM. This report used the terminology "world model" for a recurrent neural network that learns to predict the environment and the consequences of the actions of a separate controller neural net. It also introduced "artificial curiosity" and "intrinsic motivation" through generative adversarial networks. It led to many follow-up publications.

[4b] JS (2002). Exploring the Predictable. In Ghosh, S. Tsutsui, eds., Advances in Evolutionary Computing, pp. 579-612, Springer, 2002. Don't predict pixels - find predictable internal representations / abstractions of complex spatio-temporal events!

[5] JS (2015). On Learning to Think: Algorithmic Information Theory for Novel Combinations of RL Controllers and Recurrent Neural World Models. arXiv 1210.0118. Introducing a reinforcement learning (RL) prompt engineer and adaptive chain of thought: an RL neural net learns to query its "world model" net for abstract reasoning & decision making. Going beyond the 1990 neural world model [4] for millisecond-by-millisecond planning. See tweet for 10-year anniversary: https://t.co/3FYt4x2PMM

[6] JS (2018). One Big Net For Everything. arXiv 1802.08864. Collapsing the reinforcement learner and the world model of [5] (e.g., a foundation model) into a single network, using JS's neural network distillation procedure of 1991. See DeepSeek tweet: https://t.co/HIVU8BWAaS 

[7] David Ha & JS. World Models. NeurIPS 2018. https://t.co/RrUNYSIz6n 

[8] Who invented convolutional neural networks? 
Technical Note IDSIA-17-25, IDSIA, 2025.
https://t.co/HdCanIa4MN
Popular tweets on this:
https://t.co/6eDUT8qcNE
https://t.co/chfcmk253b
https://t.co/h27y6Ni2CA
https://t.co/Rpip8HBzPA
LinkedIn https://t.co/vzKQPhAGAy

[9] Sifted dot eu (18 Dec 2024). Yann LeCun raising €500m at €3bn valuation for new AI startup. The outgoing Meta exec announced last month he was launching a new project to build “world models.” https://t.co/c21tW6sy3b Quote: "The new company will focus on “world models”, systems that can understand the physical world instead of merely generating text like today’s large-language models (LLMs)." See [1].


Invented principles of meta-learning (1987), GANs (1990), Transformers (1991), very deep learning (1991), etc. Our AI is used many billions of times every day.

Jürgen Schmidhuber
Sun Dec 21 16:32:57