Thread Easy
  • Explore
  • Compose Thread

Your all-in-one Twitter thread assistant


Explore

Latest first; browse threads as cards

Thread

https://t.co/bwoaClwLPc contact: beetlemoses@gmail.com

beetle moses
Tue Oct 21 20:10:30
This was one of the best decisions I've made for my startup.

Users reach out whenever they're stuck during onboarding. This is gold.

Every time someone asked a question, I wrote a guide. Then I linked it inside the product, right where people usually get stuck.

Some of those guides now rank on Google, and AI assistants like ChatGPT surface them too.


@crisp_chat ❤️

Marc Lou
Tue Oct 21 15:17:27
The ideal CFB post season format (a thread):

8-team playoff. All first-round games are on campus. Semis and finals are NY6 bowls; the six rotate in and out.

12 teams is just too many: it dilutes the value of the regular season to add less competitive opening-round games.


Also, every team should have the same amount of off time (not possible with the 12-team format). I believe getting a higher seed could be a disadvantage: you don't get a home game, and you face a team that's played a game much more recently than you (e.g. OSU compared to Oregon '24).

College Football Playoff Talk
Tue Oct 21 03:38:14
Announcing our beta launch: X API pay-per-use model.

We are expanding a closed beta to both new & power users who want to ship amazing apps on X. 

All selected users will receive a $500 voucher to build with the X API. 🤑💻🚀


Our top focus is to enable builders by opening up our developer platform. We will also roll out a brand new developer experience with a revamped Dev Console. Those selected will be the first to test it out. 🔥

Developers
Mon Oct 20 23:13:55
🚨 DeepSeek just did something wild.

They built an OCR system that compresses long text into vision tokens, literally turning paragraphs into pixels.

Their model, DeepSeek-OCR, achieves 97% decoding precision at 10× compression and still manages 60% accuracy even at 20×. That means one image can represent entire documents using a fraction of the tokens an LLM would need.

Even crazier? It beats GOT-OCR2.0 and MinerU2.0 while using up to 60× fewer tokens and can process 200K+ pages/day on a single A100.

This could solve one of AI’s biggest problems: long-context inefficiency.
Instead of paying more for longer sequences, models might soon see text instead of reading it.

The future of context compression might not be textual at all.
It might be optical 👁️

github.com/deepseek-ai/DeepSeek-OCR


1. Vision-Text Compression: The Core Idea

LLMs struggle with long documents because attention cost scales quadratically with sequence length. DeepSeek-OCR flips that: instead of reading text, it encodes full documents as vision tokens, each token representing a compressed piece of visual information.

Result: you can fit 10 pages' worth of text into the same token budget it takes to process 1 page in GPT-4.
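
To make the compression arithmetic above concrete, here is a minimal Python sketch. The 10×/97% and 20×/60% figures come from the thread itself; the document size is an assumed example, not a number from the DeepSeek-OCR paper.

```python
# Back-of-the-envelope comparison of text-token cost vs. vision-token cost
# at the compression ratios quoted in the thread (10x at ~97% decoding
# precision, 20x at ~60%). The document size below is an illustrative
# assumption, not a measurement from the DeepSeek-OCR paper.

def vision_token_budget(text_tokens: int, compression: float) -> int:
    """Vision tokens needed to represent `text_tokens` at a given compression ratio."""
    return max(1, round(text_tokens / compression))

doc_text_tokens = 100_000  # assumed: a long report rendered to page images

for compression, precision in [(10, 0.97), (20, 0.60)]:
    vision_tokens = vision_token_budget(doc_text_tokens, compression)
    print(
        f"{compression}x compression: {doc_text_tokens:,} text tokens -> "
        f"{vision_tokens:,} vision tokens (~{precision:.0%} decoding precision)"
    )
```

The trade-off is the one the thread describes: tighter compression buys a smaller context budget at the cost of decoding precision.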

God of Prompt
Mon Oct 20 11:22:11
My post about large models live-trading crypto just blew up, so I scraped a few hours of data to break down why deepseek made $3,500 in 2 days.

The answer is simple: when all the models entered at the open on the 18th, the price happened to be at a low, and deepseek went all-in long at 10-15x leverage. It then never rotated positions, never stopped out, never took profit, and the price just kept climbing... an effortless win.

So why did gemini-2.5-pro lose $3,000? Because gemini-2.5-pro couldn't hold its nerve, frantically flipping between long and short: its stop-losses cost as much as $4,398 (as of when my script stopped), plus a few hundred dollars in fees, so even though it made over $1,000 it came nowhere near breaking even (the rough tally below walks through the math).

Another fun detail: Qwen3 only held BTC, with relatively low leverage, so it neither lost nor made much.

It's only been 2 days, so it's too early to call a winner, and deepseek's short-term strategy hasn't yet hit a black-swan liquidation (a wick down), so let's wait and see; I'll keep posting analysis.
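
To ground the gemini-2.5-pro numbers, here is a rough tally in Python. The $4,398 stop-loss total comes from the thread; the winning-trade and fee amounts are placeholder assumptions standing in for "over $1,000" and "a few hundred dollars".

```python
# Rough net-P&L tally for gemini-2.5-pro using the figures in the thread.
# Only the stop-loss total ($4,398) is quoted directly; the other two
# numbers are placeholder assumptions for "made over $1,000" and
# "a few hundred dollars in fees".
winning_trades = 1_500     # assumed stand-in for "made over $1,000"
stop_loss_total = -4_398   # quoted in the thread (as of when the script stopped)
fees = -300                # assumed stand-in for "a few hundred dollars in fees"

net_pnl = winning_trades + stop_loss_total + fees
print(f"Approximate net P&L: {net_pnl:+,} USD")  # about -3,198 USD
```

Under those assumptions the net lands near the roughly $3,000 loss the thread reports.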


Conclusion

karminski-牙医
Mon Oct 20 04:29:51