
1. Vision-Text Compression: The Core Idea

LLMs struggle with long documents because attention cost scales quadratically with sequence length. DeepSeek-OCR flips that: instead of reading the text as text tokens, it encodes full document pages as vision tokens, each token representing a compressed piece of visual information. Result: you can fit 10 pages' worth of text into the same token budget it takes to process 1 page in GPT-4.
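The "10 pages in 1 page's budget" claim is just token arithmetic. A minimal sketch, where the per-page token counts are illustrative assumptions rather than DeepSeek-OCR's published figures:

```python
# Hedged back-of-envelope: how many pages fit in one page's text-token
# budget if pages are encoded as vision tokens instead. Both per-page
# figures below are assumptions for illustration only.
TEXT_TOKENS_PER_PAGE = 600    # assumed tokens to represent one page as text
VISION_TOKENS_PER_PAGE = 60   # assumed vision tokens per encoded page image

budget = TEXT_TOKENS_PER_PAGE                    # budget for 1 page of text
pages_as_vision = budget // VISION_TOKENS_PER_PAGE

print(pages_as_vision)  # pages that fit in the same budget as vision tokens
```

With a 10:1 compression ratio between the two representations, the 10x page claim falls out directly; the real ratio depends on page density and the encoder's resolution mode.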


Conclusion


Full-stack entrepreneur: - https://t.co/MNlf5lc1G3 - https://t.co/KZEK3kuwNU - https://t.co/0ilSrNfWRI An overseas-expansion coach focused on hands-on companionship: @chuhaiqu An AI music trainee with 4 released albums: @LuoSuno

Credit to @dinkin_flickaa and @misha_mityushk for building it and @nicoduc for the designs. It's only version 1, so please share any bugs you find.


The above estimate is optimistic. With rough terrain, most columns end up with two leaf bricks stacked on top of each other, doubling leaf storage to about 64MB worth of leaf bricks. The upper levels of the SVO/DAG don't grow much (they use shared child pointers), so the total stays under 70MB. Still a win.
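The flat-vs-rough numbers above can be reproduced with simple arithmetic. A minimal sketch, assuming 8x8x8-voxel leaf bricks at 1 byte per voxel over a 2048x2048-column heightmap; these parameters are my assumptions for illustration, not the author's stated values:

```python
# Hedged sketch: leaf-brick memory for a heightmap stored as SVO/DAG bricks.
# All parameters are assumed for illustration.
BRICK = 8            # assumed leaf bricks of 8x8x8 voxels
BYTES_PER_VOXEL = 1  # assumed 1 byte of voxel data
WORLD = 2048         # assumed WORLD x WORLD columns of terrain

bricks_per_side = WORLD // BRICK             # 256 bricks along each axis
brick_bytes = BRICK ** 3 * BYTES_PER_VOXEL   # 512 bytes per brick

flat = bricks_per_side ** 2 * brick_bytes    # one brick per column: 32 MB
rough = 2 * flat                             # ~2 stacked bricks per column: 64 MB

print(flat // 2**20, "MB flat,", rough // 2**20, "MB rough")
```

Under these assumptions a flat surface costs 32MB of leaf bricks and rough terrain 64MB, matching the doubling described above; the shared-pointer DAG levels add comparatively little on top.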


Gold is a physical commodity with a cost of carry, but its futures term structure is not volatile. With few exceptions it is very stable, and the exceptions are usually due to exogenous shocks, like COVID (if I remember correctly) and Trump's tariffs.
