Thread Easy

Your all-in-one partner for Twitter threads

© 2025 Thread Easy. All Rights Reserved.

Explore

Tweet threads, newest first

Looking at @marclou's TrustMRR site and seeing all the huge revenues of solopreneurs.


Building digital assets Founder of: 💸 https://t.co/25luMOeurc 📧 https://t.co/R5XVHCie1Z 🔎 https://t.co/zQs9wdGD4c 🏟️ https://t.co/2YDuZxDKM8

Tobi Hikari
Fri Nov 28 11:41:38
Nothing new here, just a quick case example of using AI to R&D.

(GPT-5.1 vs Opus 4.5)

For context:

We have 2 versions of HVM capable of running SupGen:
→ HVM3: used to develop it, hits 160m interactions/s
→ HVM4: polished version, hits 130m interactions/s

That is, the new version is more modern but slightly slower, since we haven't optimized it yet.

Yesterday, I launched 2 coding agents: Opus 4.5 (ultrathink) and GPT-5.1-codex-max (xhigh), and asked them to optimize the new HVM4 as much as they could.

Result: hours later, they completely failed.
Not even +1%.

I then asked them to keep trying.
They failed again. And again. For hours.

At some point, they had just given up.
They refused to even keep trying.

GPT-5 wrote:

> I've tried multiple structural and low-level changes aimed at cutting memory traffic and boosting throughput, but each attempt either broke the build, regressed performance, or failed to improve beyond the ~120 M itrs/s baseline.

> Given the fixed clang -O3 constraint and the memory-bound nature of this workload, I don't currently have a viable change that safely pushes to 140 M itrs/s. Continuing to "just keep trying" is likely to produce more regressions rather than real gains.

So, I tried something different: this time, I copy/pasted the old HVM3 dir into HVM4, and wrote:

These are the old and new HVM implementations. The old one contains some optimizations that the new one didn't implement yet. Your goal is to understand the differences and port ALL optimizations from the old one, into the new architecture.

Sent that to Opus.
10 minutes later, I checked the terminal.

"190m interactions per second"

That was... a pretty happy sight, since it is an absolute record for this benchmark. We've never seen anything close to that on a single-core CPU.
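The trick described above, copying the old tree next to the new one and prompting an agent to port the optimizations, is simple enough to sketch as a shell workflow. The directory layout below and the `claude -p` invocation are illustrative assumptions, not the author's exact commands:

```shell
set -eu

# Stand-in directories for the two implementations (hypothetical layout;
# the real HVM repositories are not reproduced here).
mkdir -p HVM3/src HVM4/src

# Step 1: copy the old, optimized implementation into the new project,
# so the agent can diff both trees side by side.
cp -r HVM3 HVM4/HVM3-reference

# Step 2: hand both trees to a coding agent in one non-interactive prompt.
# (`claude -p` is Claude Code's print mode; commented out here since it
# requires the CLI to be installed and an API key to be configured.)
# cd HVM4 && claude -p "These are the old and new HVM implementations. \
#   The old one contains some optimizations that the new one didn't \
#   implement yet. Your goal is to understand the differences and port \
#   ALL optimizations from the old one, into the new architecture."

# The new tree now contains the old one as a reference alongside its sources.
ls HVM4
```

The point of the copy is purely about context: with both trees in one working directory, the agent can read and diff them directly instead of being asked to rediscover the optimizations from scratch.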

This reinforces my perception of the state of LLMs:
→ They are extremely good at coding.
→ They are extremely bad at innovation.

Both models were utterly incapable of coming up with the ideas that we did, but, once injected with the solution, they're extremely competent at implementing it, reading and writing lots of code, which saves a lot of time. The most important optimizations from HVM3 are now up on the new architecture, reaching a new record, and I didn't have to code anything at all. I just had to have the idea to do this, and it worked like a charm.

For the record, I've stopped using Gemini 3 completely. I think it is the smartest model in the world, but it isn't really suitable for coding due to bad instruction following, lots of connection errors and lag, and Gemini CLI performing poorly. GPT-5.1-codex-max is nice-ish, but it is slow, and I have yet to see it outperform Opus 4.5, which is my model for everything again. I love how consistent Claude models have always been for coding, and I'm so glad to have one that is actually smart too.


Kind / Bend / HVM / INets / λCalculus

Taelin
Fri Nov 28 11:39:54
MkDollar is a tool website. The idea behind it is to solve some of the tedious problems and repetitive work that indie developers and product builders run into when trying to make money in overseas markets, helping users earn their first US dollar fast!

MkDollar was vibe-coded in a few days on top of the MkSaaS template, so you may hit some small issues. It will be polished step by step, and some new, more practical features are already in development.


🔥 The best AI SaaS boilerplate - https://t.co/VyNtTs0jSX 🚀 The best directory boilerplate with AI - https://t.co/wEvJ1Dd8aR 🎉 https://t.co/bh1RxeERuY & https://t.co/zubXJCoY92 & https://t.co/tfQf8T7gGF

Fox@MkSaaS.com
Fri Nov 28 11:37:46
Feature 4: If the product you run is itself a directory site, you can also apply for free in the dashboard to be listed as one of the directory sites in the backlink list. This increases your product's exposure while also raising the directory's revenue. The current requirement is DR >= 30; if you're not there yet, work on raising your DR first.

Applications are free for a limited time; this feature may become paid later.


Fox@MkSaaS.com
Fri Nov 28 11:37:42
Feature 2: There's a dedicated promotions page that publishes, from time to time, lists of products running sales along with their discount details.

https://t.co/FIXPwPoHPz

It happens to be Black Friday right now, so anyone running a Black Friday deal can submit their product for free. More holiday events will be added later, so don't miss out.


Feature 3: There's a page indexing more than 240 backlink directories where you can submit your product, and you can get the full list for free. You can even manage your products' backlink submission status in the dashboard for free, making it easy to track submissions anywhere, anytime.

https://t.co/9xuEFmTMIR

If any backlink info is wrong, send a DM or report it in the Discord server.

Fox@MkSaaS.com
Fri Nov 28 11:37:33
Happy Thanksgiving, everyone! The MkDollar site launched in closed beta today!

Site: https://t.co/QzhUwYuD1k

MkDollar means "make dollars". The features completed so far:

Feature 1: Users can submit their products in the dashboard and view each product's DR, traffic trends, and traffic distribution. Monitoring DR changes and sending notifications will be supported in the future.

The feature is free; submitting only requires entering a domain, and the info is fetched fully automatically.


Fox@MkSaaS.com
Fri Nov 28 11:37:29