Ilya's latest interview video, full bilingual version (Chinese and English).

Ilya: The Scaling Era Ends; Next Phase: How to Make Models Learn Like Humans

This is Ilya's first systematic account, since leaving OpenAI to found SSI, of his thinking on current AI development, future intelligence, safety and alignment, and the evolution of human society. The conversation revolves around three questions:

- Why are current AI test scores so high while real-world performance falls far short?
- How can the "generalization and value function" of human intelligence inspire future AI training methods?
- What kind of "Safe Superintelligence" does SSI want to build?

1. The fundamental problem with current AI: it can score high, but it can't do practical work.

Current models (such as the GPT series) perform exceptionally well on test tasks (evals), yet their actual economic impact is limited. On complex tasks a model can fall into "cyclic errors": fixing one bug only introduces another. Ilya attributes this to the reinforcement learning phase being too focused on "rewarding humans," neglecting generalization in the real world.

2. Pre-training vs. Reinforcement Learning: Where Does True Intelligence Come From?

- Pre-training: uses "all data," without human selection; the model learns a broad projection of the human world.
- Reinforcement learning (RL): requires manually designed environments, and the goal is often set as "making the model look better on evaluations."

Ilya argues that this makes the model resemble a "student who only knows how to take exams," lacking genuine insight and transferability.

3. The key to human intelligence: the value function and emotion

Ilya proposes that humans can learn and generalize in a complex world because we possess an "intrinsic value system." That system is emotion:

- Happiness → positive feedback;
- Anxiety → a reminder of potential risk;
- Shame → an adjustment of social strategy;
- Curiosity → a drive to explore.

In reinforcement learning terms, this works like an implicit value function: it lets a person know in advance that "the direction is wrong," rather than waiting for a punishment signal (a toy sketch at the end of this summary illustrates the contrast). Hence his view: "True intelligence is not just the ability to predict, but a constantly updated value system." If future AI can learn to self-assess whether a task is heading in the right direction, it will possess human-like, meaning-driven learning.

4. "The Scaling Era Has Ended, and the Research Era Has Arrived"

Ilya offers sharp criticism of the current state of the AI industry. He divides the past decade of AI progress into two eras:

- 2012–2020: the Research Era → innovation came from groundbreaking architectures (AlexNet, Transformer).
- 2020–2025: the Scaling Era → all effort went into accumulating data, computing power, and model parameters.

He believes this trend has peaked: "Scaling has sucked the air out of innovation." Spending on computing power remains high, but the returns on piling up more resources are diminishing. The next breakthrough must return to the question of how to make models learn like humans, rather than demanding ever more compute. In other words, the focus is shifting from quantitative expansion to structural innovation, and the key to future competition will not be computing power but who can propose new learning principles.
5. A Roadmap for the Next Ten Years

Ilya's prediction: within the next 5–20 years, AI will learn to learn in a human-like way. It will be able to:

- actively explore the world;
- understand the laws of physics and society;
- reflect on itself;
- reason across modalities (multi-sensory integration).

Once such a system matures, it will bring an explosion of economic productivity, completely reshape education and research, and move the human-machine relationship into an era of "co-intelligence." However, Ilya emphasized that such systems should be deployed gradually and transparently, so that the public and governments can understand their capabilities and risks; SSI will proceed in a progressive, safe, and transparent manner, with the capabilities, risks, and control strategies at each stage subject to external review.

(As this is an AI translation, there may be minor errors.)
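To make the "implicit value function" idea from section 3 concrete, here is a minimal toy sketch. This is an editorial illustration under stated assumptions, not code from the interview: the number-line world, GOAL, and value_estimate are all invented. One agent learns only from a sparse terminal reward; the other consults a per-step value estimate, the role Ilya assigns to emotions such as anxiety.

```python
# Toy sketch (illustration only, not from the interview): an agent on a
# number line. A terminal reward gives feedback only when the episode
# ends; an intrinsic value function can flag a bad move at every step.

import random

GOAL = 10  # hypothetical target position on the number line


def terminal_reward(position: int) -> float:
    """Sparse signal: arrives only once the episode is over."""
    return 1.0 if position == GOAL else 0.0


def value_estimate(position: int) -> float:
    """Implicit value function: a per-step estimate of how promising a
    state is (here, simply negative distance to the goal)."""
    return -abs(GOAL - position)


def run_episode(use_value_function: bool, steps: int = 20) -> int:
    """Random walk; optionally let the value function veto bad moves."""
    position = 0
    for _ in range(steps):
        step = random.choice([-1, 1])
        if use_value_function and value_estimate(position + step) < value_estimate(position):
            # "The direction is wrong": the proposed move lowers the
            # estimated value, so reverse it now instead of waiting for
            # the terminal reward to reveal the mistake.
            step = -step
        position += step
    return position


random.seed(0)
end = run_episode(use_value_function=False)
print(f"terminal reward only: ended at {end}, reward {terminal_reward(end)}")
end = run_episode(use_value_function=True)
print(f"with value function:  ended at {end}, reward {terminal_reward(end)}")
```

The contrast is the point of section 3: the reward-only agent wanders and rarely ends exactly on the goal, while the per-step value signal corrects course long before any reward arrives, which is what it means to know in advance that "the direction is wrong."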
Full text: xiaohu.ai/c/ai/ilya-scal…/OcUOu4
youtu.be/aR20FWCCjAs
https://t.co/cd3sNhZ9Jz