Thread Easy
  • Explore
  • Write a thread

Your all-in-one partner for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first — browse tweet threads


Introducing Solito 5:

A simplified way to share code between Next.js and React Native.

→ Pure Next.js on Web
→ react-native-web dependency dropped
→ Next.js 16 support
→ Expo 54 starter monorepo

yarn add solito


Release notes → https://t.co/MUiGFLwjb4 (I wrote these on a plane without WiFi or AI.)

avatar for Fernando Rojo
Fernando Rojo
Tue Oct 21 23:56:10

https://t.co/bwoaClwLPc contact: beetlemoses@gmail.com

avatar for beetle moses
beetle moses
Tue Oct 21 20:10:30
This was one of the best decisions I've made for my startup.

Users reach out whenever they’re stuck during onboarding. This is gold.

Every time someone asked a question, I wrote a guide. Then I linked it inside the product, right where people usually get stuck.

Some of those guides now rank on Google, and AI assistants like ChatGPT surface them too.


@crisp_chat ❤️

avatar for Marc Lou
Marc Lou
Tue Oct 21 15:17:27
The ideal CFB post season format (a thread):

8-team playoff. All first-round games are played on campus. Semis and finals are NY6 bowls, with the six bowls rotating in and out.

Twelve teams is just too many. It dilutes the value of the regular season by adding less competitive opening-round games.


Also, every team should get the same amount of time off (not possible with a 12-team format). I believe a higher seed could even be a disadvantage: you don’t get a home-field game, and you face a team that has played much more recently than you (e.g. OSU compared to Oregon ‘24).

avatar for College Football Playoff Talk
College Football Playoff Talk
Tue Oct 21 03:38:14
Announcing our beta launch: X API pay-per-use model.

We are expanding a closed beta to both new & power users who want to ship amazing apps on X. 

All selected users will receive a $500 voucher to build with the X API. 🤑💻🚀


Our top focus is to enable builders by opening up our developer platform. We will also roll out a brand new developer experience with a revamped Dev Console. Those selected will be the first to test it out. 🔥

avatar for Developers
Developers
Mon Oct 20 23:13:55
🚨 DeepSeek just did something wild.

They built an OCR system that compresses long text into vision tokens, literally turning paragraphs into pixels.

Their model, DeepSeek-OCR, achieves 97% decoding precision at 10× compression and still manages 60% accuracy even at 20×. That means one image can represent entire documents using a fraction of the tokens an LLM would need.

Even crazier? It beats GOT-OCR2.0 and MinerU2.0 while using up to 60× fewer tokens and can process 200K+ pages/day on a single A100.

This could solve one of AI’s biggest problems: long-context inefficiency.
Instead of paying more for longer sequences, models might soon see text instead of reading it.

The future of context compression might not be textual at all.
It might be optical 👁️

github.com/deepseek-ai/DeepSeek-OCR


1. Vision-Text Compression: The Core Idea

LLMs struggle with long documents because attention cost scales quadratically with sequence length. DeepSeek-OCR flips that: instead of reading text, it encodes full documents as vision tokens, each token representing a compressed piece of visual information.

Result: you can fit 10 pages’ worth of text into the same token budget it takes to process 1 page in GPT-4.
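The token math above can be sketched in a few lines. This is a back-of-the-envelope illustration using the compression ratios quoted in the thread (10× and 20×); the `tokens_per_page` figure of 1000 is an assumed round number, not a spec of DeepSeek-OCR.

```python
# Illustrative token-budget arithmetic for vision-text compression.
# Assumption: a typical page of text costs ~1000 LLM tokens to read directly.

def text_tokens(pages: int, tokens_per_page: int = 1000) -> int:
    """Tokens an LLM spends reading the raw text of `pages` pages."""
    return pages * tokens_per_page

def vision_tokens(pages: int, compression: float, tokens_per_page: int = 1000) -> int:
    """Tokens spent when each page is rendered to an image and compressed
    into vision tokens at the given compression ratio."""
    return round(pages * tokens_per_page / compression)

one_page_budget = text_tokens(1)           # reading 1 page of text: 1000 tokens
ten_pages_at_10x = vision_tokens(10, 10.0)  # 10 pages at 10x: 1000 tokens
ten_pages_at_20x = vision_tokens(10, 20.0)  # 10 pages at 20x: 500 tokens

print(one_page_budget, ten_pages_at_10x, ten_pages_at_20x)  # -> 1000 1000 500
```

At 10× compression, ten pages fit in the budget of one, which is exactly the "10 pages for the price of 1" claim; the trade-off the thread quotes is decoding precision dropping from 97% at 10× to 60% at 20×.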

avatar for God of Prompt
God of Prompt
Mon Oct 20 11:22:11