
Independent tech website - 蓝点网 (Landian News) / Thanks for following. Subscribe to the channel: https://t.co/xzeoUEoPcU Contact: https://t.co/LJK1g3biPp


Moved from investing into entrepreneurship: job hunting, interview questions, resume editing, mock interviews. Startups (cold start) | AI, AIGC | Security | RAG | Spatiotemporal intelligence | Cognitive psychology | Agents | Life sciences | Reinforcement learning. I built open source software at https://t.co/b69DXZhcyR


Founder @Tailscan for Tailwind CSS Co-Founder @Lexboostai + many random side projects: https://t.co/TPk3m9LhZa, https://t.co/uW4shohLZq, https://t.co/BFujf7veHX


How does Kimi-K2-Thinking compare to MiniMax M2 in size? 2/n
1. MiniMax M2 has 10B active and 230B total parameters with full attention.
2. Kimi K2 has 35B active and 1 trillion total parameters.
Both have most of their weights in 8 bits. That means M2 will be much easier to host, and its KV cache will be much more compact. MiniMax M2 uses full attention; it would be interesting to see whether Kimi-K2 has done something interesting with the attention layers. (For these calculations I am assuming Kimi-K2-Thinking is based on Kimi-K2-Base.)
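A minimal back-of-envelope sketch of the sizes discussed above. The weight totals (230B vs 1T, roughly 8 bits per weight) come from the post itself; the KV-cache formula is the standard per-token cost of caching K and V across layers, and the example hyperparameters (layers, KV heads, head dim) are placeholders for illustration, not either model's real config.

```python
# Back-of-envelope sizing, assuming ~8-bit weights as stated in the post.

def weight_footprint_gb(total_params: float, bits_per_weight: float = 8) -> float:
    """GB needed just to hold the weights (ignores activations, overhead)."""
    return total_params * bits_per_weight / 8 / 1e9

def kv_cache_gb_per_token(layers: int, kv_heads: int, head_dim: int,
                          bytes_per_elem: int = 2) -> float:
    """GB of KV cache per token: one K and one V vector per layer."""
    return 2 * layers * kv_heads * head_dim * bytes_per_elem / 1e9

print(f"MiniMax M2 weights: ~{weight_footprint_gb(230e9):.0f} GB")
print(f"Kimi K2 weights:    ~{weight_footprint_gb(1e12):.0f} GB")

# Hypothetical config, purely to show how per-token KV cost scales:
per_tok = kv_cache_gb_per_token(layers=62, kv_heads=8, head_dim=128)
print(f"~{per_tok * 1000:.2f} GB of KV cache per 1k tokens")
```

Under these numbers the weights alone are roughly 230 GB vs 1,000 GB, which is the core of the "much easier to host" claim; the KV-cache comparison additionally depends on each model's actual layer count and attention layout.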


Indie hacker, solopreneur building a portfolio of products & SaaS: 🔌 https://t.co/rLuxXYrO8V 🟩 https://t.co/f93dEmKKZU 📋 https://t.co/zmLERuwToj ✍️ https://t.co/MW9HQLABxB 🏡 https://t.co/EtGIs3qLMx


AI @amazon. All views personal!
