Andrej Karpathy's 2025 LLM Year-End Review: 6 "Paradigm Shifts"

1. RLVR: In 2025, reinforcement learning from verifiable rewards (RLVR) became the new standard stage of LLM training. By optimizing against rewards over long horizons in objectively checkable domains such as mathematics and code, models naturally developed human-like "reasoning" strategies, driving major capability gains throughout the year.

2. Ghosts vs. Animals / Uneven Intelligence: In 2025, the industry began to realize that an LLM is a "summoned ghost" rather than an "evolved animal". Its intelligence is extremely uneven: genius-level in verifiable domains yet easily fooled elsewhere, which led to a widespread loss of trust in benchmarks.

3. The Emerging LLM Application Layer: Exemplified by Cursor, a new LLM application layer emerged in 2025. Through context engineering, multi-call orchestration, dedicated interfaces, and autonomy sliders, it turns the "general-purpose college student" of the base model into a "professional team" for a specific vertical field.

4. Local AI Agents: Claude Code offered the first convincing demonstration of a locally running LLM agent that deeply integrates with the user's private environment and data, transforming AI interaction from a cloud-based chat website into a "little sprite residing on the computer".

5. Vibe Coding: 2025 saw the rise of "vibe coding": generating code simply by describing intent in natural language. It democratizes programming, boosts professional productivity, and makes code cheap and disposable.

6. A Prototype of the LLM GUI: Nano Banana foreshadowed an era of graphical user interfaces for LLMs, letting models output information in the visual formats humans prefer by deeply integrating text, image generation, and world knowledge.
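The "multi-call orchestration" mentioned in item 3 can be sketched roughly as follows: an application layer that chains several model calls (plan, act, review) around one task, packing relevant context into each prompt. This is only an illustrative sketch; the function names (`call_llm`, `orchestrate`) are hypothetical, not any real product's API, and `call_llm` is a stub standing in for a real model endpoint.

```python
def call_llm(prompt: str) -> str:
    """Stub model call: a real application would hit an LLM API here."""
    return f"[model output for: {prompt[:40]}]"

def orchestrate(task: str, context: list[str]) -> dict:
    """Run a fixed plan -> act -> review pipeline, threading context through."""
    ctx = "\n".join(context)  # context engineering: pack relevant files/docs
    plan = call_llm(f"Context:\n{ctx}\nPlan steps for: {task}")
    draft = call_llm(f"Plan:\n{plan}\nProduce a result for: {task}")
    review = call_llm(f"Review this result for errors:\n{draft}")
    return {"plan": plan, "draft": draft, "review": review}

result = orchestrate("rename a function across the repo", ["src/app.py: ..."])
print(sorted(result))
```

A dedicated interface would then surface each stage (plan, draft, review) separately, and an autonomy slider would decide how many of these calls run without user confirmation.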