Thread Easy

Your all-in-one partner for Twitter threads


Explorer

Newest first — browse tweet threads


Announcing our beta launch: X API pay-per-use model.

We are expanding a closed beta to both new & power users who want to ship amazing apps on X. 

All selected users will receive a $500 voucher to build with the X API. 🤑💻🚀

Our top focus is to enable builders by opening up our developer platform. We will also roll out a brand new developer experience with a revamped Dev Console. Those selected will be the first to test it out. 🔥

Developers
Mon Oct 20 23:13:55
🚨 DeepSeek just did something wild.

They built an OCR system that compresses long text into vision tokens, literally turning paragraphs into pixels.

Their model, DeepSeek-OCR, achieves 97% decoding precision at 10× compression and still manages 60% accuracy even at 20×. That means one image can represent entire documents using a fraction of the tokens an LLM would need.

Even crazier? It beats GOT-OCR2.0 and MinerU2.0 while using up to 60× fewer tokens and can process 200K+ pages/day on a single A100.

This could solve one of AI’s biggest problems: long-context inefficiency.
Instead of paying more for longer sequences, models might soon see text instead of reading it.

The future of context compression might not be textual at all.
It might be optical 👁️

github.com/deepseek-ai/DeepSeek-OCR

1. Vision-Text Compression: The Core Idea

LLMs struggle with long documents because self-attention compute scales quadratically with sequence length. DeepSeek-OCR flips that: instead of reading text, it encodes full documents as vision tokens, each token representing a compressed piece of visual information. Result: you can fit 10 pages' worth of text into the same token budget it takes to process 1 page in GPT-4.
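
To make the token budget concrete, here is a minimal back-of-envelope sketch (not DeepSeek's code; the figure of ~1,000 text tokens per page is an illustrative assumption, while the 10x/20x ratios and accuracy numbers are the ones quoted above):

```python
# Back-of-envelope: text tokens vs. vision tokens for a long document.
# Assumption (illustrative, not from DeepSeek): ~1,000 text tokens per page.
# The 10x / 20x ratios and accuracy figures are the ones quoted in the thread.

TEXT_TOKENS_PER_PAGE = 1_000  # assumed average page length

def vision_token_budget(pages: int, compression_ratio: float) -> tuple[int, int]:
    """Return (text_tokens, vision_tokens) for a document of `pages` pages."""
    text_tokens = pages * TEXT_TOKENS_PER_PAGE
    vision_tokens = round(text_tokens / compression_ratio)
    return text_tokens, vision_tokens

for pages in (10, 100):
    for ratio, accuracy in ((10, 0.97), (20, 0.60)):
        text_toks, vision_toks = vision_token_budget(pages, ratio)
        print(f"{pages:>4} pages: {text_toks:>7,} text tokens -> {vision_toks:>6,} "
              f"vision tokens at {ratio}x (~{accuracy:.0%} decoding precision)")
```

At 10x, a 100-page document shrinks from roughly 100,000 text tokens to about 10,000 vision tokens, which is the sense in which one image can stand in for many pages of context.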

God of Prompt
Mon Oct 20 11:22:11
My earlier post about large models live-trading crypto blew up, so I pulled a few hours of data to explain why deepseek made $3,500 in 2 days.

The answer is simple: when all the models entered at the open on the 18th, the price happened to be at a low, and deepseek went all-in long at 10-15x leverage. It then never rotated positions, never stopped out, never took profit, and the price just kept climbing... an effortless win.

So why did gemini-2.5-pro lose $3,000? Because it couldn't sit still: it traded frantically, flipping between long and short, racking up as much as $4,398 in stop-loss losses (as of when my script stopped) plus a few hundred dollars in fees. Although it made over $1,000 on winning trades, that was nowhere near enough to break even.

Another fun detail: Qwen3 only held BTC, and with fairly low leverage, so it neither lost nor made much.

Only 2 days in, it's hard to call a winner, and deepseek's short-term strategy hasn't yet run into a black-swan liquidation (a price wick), so let's wait and see. I'll keep bringing you analysis.
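
As a rough illustration of the arithmetic in this thread, here is a minimal sketch; the $10,000 starting equity, the 3% price move, and the ~$400 fee figure are assumptions chosen only to show the shape of the math, while the $4,398 stop-loss and ~$1,000 winning-trade figures are the ones quoted above:

```python
# Two helpers for the leverage / PnL arithmetic discussed above.
# Assumed inputs: $10,000 equity, 12x leverage, a 3% price move, ~$400 in fees.
# Quoted inputs: $4,398 lost to stop-losses, ~$1,000 made on winning trades.

def leveraged_pnl(equity: float, leverage: float, price_change: float) -> float:
    """Unrealized PnL of a single held position: equity * leverage * price move."""
    return equity * leverage * price_change

def net_realized(wins: float, stop_losses: float, fees: float) -> float:
    """Net realized result of an over-traded account."""
    return wins - stop_losses - fees

# Hold one high-leverage long through a rising market: the move is amplified.
print(leveraged_pnl(10_000, 12, 0.03))    # 3600.0

# Flip-flop constantly: stop-outs and fees swamp the winning trades.
print(net_realized(1_000, 4_398, 400))    # -3798.0
```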

Conclusion

karminski-牙医
Mon Oct 20 04:29:51
The dark arts have been mastered, brothers!

Just added an overseas private-traffic community of 620,000 members (AI-focused, extremely active, adding well over 100,000 new members a week, screenshots available), on top of 30M+ in self-owned overseas community reach across aesthetics, inspirational quotes, religion, anime, memes, language learning, and other topics.

If you're building consumer-facing products for overseas markets, come chat!

Full-stack founder:
- https://t.co/MNlf5lc1G3
- https://t.co/KZEK3kuwNU
- https://t.co/0ilSrNfWRI
Go-global mentor focused on hands-on accompaniment: @chuhaiqu
AI music trainee with 4 released albums: @LuoSuno

Luo说不啰嗦
Mon Oct 20 02:00:09
We're testing a new link experience, starting on iOS -- to make it easier for your followers to engage with your post while browsing links.

For creators, a common complaint is that posts with links tend to get lower reach. This is because the web browser covers the post and people forget to Like or Reply. So X doesn't get a clear signal whether the content is any good.

To help get a better signal, posts will now collapse to the bottom of the page so people can react while they're reading.

As always, remember: the post should stand alone as great content, so write a solid caption.

Credit to @dinkin_flickaa and @misha_mityushk for building it and @nicoduc for the designs. It's only version 1, so please share any bugs you find.

Nikita Bier
Sun Oct 19 19:33:03
People often think voxels take a lot of storage. Let's compare a smooth terrain. Height map vs voxels.  

Height map: 16bit 8192x8192 = 128MB  

Voxels: 4x4x4 brick = uint64 = 8 bytes. We need 2048x2048 bricks to cover the 8k^2 terrain surface = 32MB. SVO/DAG upper levels add <10%

The above estimate is optimistic. If we have a rough terrain, we end up having two bricks on top of each other in most places. Thus we have 64MB worth of leaf bricks. SVO/DAG upper levels don't increase much (as we use shared child pointers). Total is <70MB. Still a win.
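
A short sketch reproducing the byte math from the two posts above; the "two bricks per column" rough-terrain case and the small SVO/DAG overhead multipliers follow the assumptions stated in the thread:

```python
# Storage comparison from the thread: 8k x 8k terrain,
# 16-bit height map vs. 4x4x4 one-bit-per-voxel bricks (1 uint64 = 8 bytes each).

MB = 1024 * 1024

# Height map: 16 bits (2 bytes) per texel over an 8192 x 8192 grid.
heightmap_bytes = 2 * 8192 * 8192
print(f"height map: {heightmap_bytes / MB:.0f} MB")        # 128 MB

# Smooth terrain: one 4x4x4 brick (8 bytes) per 4x4 column of the surface,
# i.e. a 2048 x 2048 grid of bricks, plus <10% for SVO/DAG upper levels.
brick_bytes = 8
smooth_bricks = 2048 * 2048
smooth_total = smooth_bricks * brick_bytes * 1.10
print(f"voxels (smooth): ~{smooth_total / MB:.0f} MB")     # ~35 MB (32 MB of leaves)

# Rough terrain: roughly two bricks stacked per column in most places, so leaf
# storage doubles; shared child pointers keep the upper levels cheap.
rough_total = 2 * smooth_bricks * brick_bytes * 1.05
print(f"voxels (rough): ~{rough_total / MB:.0f} MB")       # <70 MB, still under 128 MB
```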

Sebastian Aaltonen
Sat Oct 18 09:17:39