Explore

Newest first — browse tweet threads

Claude Opus 4.5 is strong, but the Claude Code framework is the real key: Yam Peleg's impressions after using Claude Opus 4.5

Core takeaway: the model is excellent, but the framework is the revolutionary breakthrough
· Where Opus 4.5 fits: Peleg sees Opus not as "best at everything" but as excellent on most everyday coding tasks. It, OpenAI Codex-Max, and Google Gemini-3 each have their specialties; their performance is close, but their styles differ sharply:
  · Gemini-3: unbeatable at internet search (its built-in GoogleSearch tool beats the other models' native search; Peleg keeps Perplexity as a backup).
  · Codex-Max: first choice for academic research; more efficient at handling complex literature and analysis.
  · Opus 4.5: a strong generalist that "feels like the best" for coding and agentic tasks in particular. Peleg has developed an instinct for switching models by task.
· What makes Claude Code shine: browser control, config management, sub-agent collaboration, and automated execution turn Opus from a mere "coding tool" into a "whole-computer-use agent". Peleg used to maintain a custom fork, but can now customize heavily through the Agent SDK without touching the source (see the sketch after this list). Even back when it only ran the weaker Sonnet models, the framework was already indispensable; paired with Opus today, "nothing else comes close".
  · Strengths: supports "fire-and-forget" tasks (e.g. running complex scripts in the background) and handles multiple sub-agents in parallel.
  · Pain points: still buggy; under heavy multi-sub-agent load it can exhaust 256 GB of RAM and crash the system.
· Compared with other tools: Opus + Claude Code feels like an "all-round assistant", while Codex or Gemini feel more like "specialist coders". The author doesn't mind that Claude Code isn't open source, since the flexibility is already sufficient.
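
For readers curious about the customization route mentioned above, here is a minimal sketch of driving Claude Code programmatically through the Python Agent SDK. It assumes the claude-agent-sdk package and its query()/ClaudeAgentOptions interface; the prompt, tool list, working directory, and model name are illustrative and are not taken from Peleg's actual setup.

```python
# Minimal sketch: delegate a "fire-and-forget" style background task to Claude Code
# via the Python Agent SDK (assumes claude-agent-sdk is installed and a Claude
# Code login / Anthropic API key is already configured).
import anyio
from claude_agent_sdk import query, ClaudeAgentOptions

async def main():
    options = ClaudeAgentOptions(
        model="claude-opus-4-5",                  # illustrative model name
        allowed_tools=["Bash", "Read", "Write"],  # restrict what the agent may touch
        permission_mode="acceptEdits",            # auto-accept file edits
        cwd="/path/to/project",                   # hypothetical project directory
    )
    # Stream the agent's messages; in a real "send and forget" setup this loop
    # would run as a background task while you keep working on something else.
    async for message in query(
        prompt="Profile scripts/etl.py and write the findings to notes/profile.md",
        options=options,
    ):
        print(message)

anyio.run(main)
```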

Peleg's AI coding workflow
A practical aside: by 2025 Peleg has folded LLMs into his daily routine, but still drives the core work himself:
· Hard tasks: fully manual (faster and more accurate); never delegated.
· Medium and low difficulty: broken into small modules and delegated to agents, with strict supervision and editing of the output. The style of AI-generated code has improved a lot; many "one-off hacks" need no changes at all.
· Planning and architecture: always kept in his own hands, never outsourced.
Peleg has gone from "a year of zero usage" to "daily reliance", but stresses micromanagement rather than casual "vibe coding".

Why does it matter?
It reflects a subtle shift in the 2025 AI ecosystem: model competition is white-hot, but agent frameworks (such as Claude Code) are the real productivity multipliers. It marks developers' evolution from "tool users" to "agent collaborators", especially in coding and automation.

Shao Meng, middle-aged laid-off programmer 😂 Focus - Context Engineering, AI Agents. Sharing - AI papers, apps and OSS. ex Microsoft MVP. Collaboration - DM/email: shaomeng@outlook.com 📢 WeChat official account/Xiaohongshu: AI 启蒙小伙伴

meng shao
Fri Nov 28 00:24:54
I have been following Takuya for a loooong time, even before my time building DevUtils. He's one of my indie role models.

Recommended reading for indie hackers: https://t.co/CWy3rsUuey

Tony Dinh 🎯
Fri Nov 28 00:21:33
RT @marclou: I onboarded customers with a lot of visitors (200+ per minute), which made the real-time globe crash their browser 😅

So I shi…

🧑‍💻 https://t.co/Y30jsaHwz9 $20K/m ⚡️ https://t.co/vatLDmi9UG $17K/m 📈 https://t.co/3EDxln5mdi $16K/m ⭐️ https://t.co/MZc8tG9xWi $8K/m 🧬 https://t.co/SfrVXVtmdA $.5K/m 🍜 https://t.co/r07EpGSYJ2 $0K/m 🧾 https://t.co/7olaOzV8Xd $0/m +18 https://t.co/4zCWHGJp1S

Marc Lou
Fri Nov 28 00:21:23
🔥 The secret to a million views? Sharing my content-creation workflow
Automated tweeting + a self-built AI search plugin to turn Twitter into a second brain

Last night I unboxed xAIcreator, which @Yangyixxxx is building.
It works well; I'm very bullish on the AI writing + multi-account syncing direction.
I had joined the product preview group earlier, and the product shipped in under two months.
Come give it a try~
https://t.co/d20Fv2qdGv

🚧 building https://t.co/AJfZ3LMlgq https://t.co/606cFUoda3 https://t.co/s0m0tpQMDH https://t.co/UQ5vrrYdAG 🐣learning/earning while helping others ❤️making software, storytelling videos 🔙alibaba @thoughtworks

吕立青_JimmyLv (闭关ing) 2𐃏25
Fri Nov 28 00:21:05
All credits to Artificial Analysis for doing this benchmarking but I am really apprehensive of the timing of this. And I am really not in favor of releasing half-baked results even if it comes with a disclaimer that "we'll be further updating these results as more optimizations go in".

This looks more like the continued attempt to assuage concerns about TPUs eating Nvidia's share - which is just panic fueled by "AI experts", the same experts who will give you a 1000-page cheat-sheet to use AI agents to make a 7-figure ARR business over the Thanksgiving weekend.

Either way - half-baked results with disclaimers are only useful when you know the audience is going to spend time reading and understanding the results. Not when there are AI doomers lurking everywhere. 

And more importantly - there's certainly a step difference between the stack Google would internally use for their TPU-runs vs what is available to the community today. That's why the CUDA moat exists in the first place - the maturity of the CUDA software stack is a generation ahead of anything out there.

Bye and Happy Thanksgiving. Time to eat some potatoes.

Good model @xAI | prev. d-matrix, @Google. I am Speculating. You Decode. Opinions my own

Gaurav
Fri Nov 28 00:13:56
If you are in Da Nang, come join Hackanang and BUILD together!

Ping @dayonefoundry or @afonsocrg to know this week location 📍

Creating software I love to use. 🧠 https://t.co/p4T2vFZoJ1 $137K/m 🧰 https://t.co/y0Lq4RQRsu $5K/m 📕 https://t.co/btuasMBHPT $518/m 🖼️ https://t.co/KfFdieGrVf $50/m

Tony Dinh 🎯
Fri Nov 28 00:11:47