Thread Easy

Your all-in-one Twitter thread assistant


Explore

Newest first; browse threads as cards


Edmund (@edmundtian) gave an absolute masterclass on storytelling today at the @HackerResidency

Easily one of the best & most useful talks I’ve ever been to

I have so many ideas for Vibingbase landing page & positioning now

Built 8 startups in 12 months • Sold 3/8 startups • Building https://t.co/1tAr7flVT2, https://t.co/pL2Vv5UTCK, https://t.co/sYr4JzzOtJ, https://t.co/HvvzdvWg5H, https://t.co/gtanzcV2Gs and a secret startup 🤫

Minh-Phuc Tran
Mon Nov 03 13:34:07
Thank you https://t.co/xiEv62bAMy by @ThibaultJaigu 🤗

🧑‍💻 https://t.co/Y30jsaHwz9 $20K/m ⚡️ https://t.co/vatLDmi9UG $17K/m 📈 https://t.co/3EDxln5mdi $16K/m ⭐️ https://t.co/MZc8tG9xWi $8K/m 🧬 https://t.co/SfrVXVtmdA $.5K/m 🍜 https://t.co/r07EpGSYJ2 $0K/m 🧾 https://t.co/7olaOzV8Xd $0/m +18 https://t.co/4zCWHGJp1S

Marc Lou
Mon Nov 03 13:33:51
2. write to remember

intercomputer realist @a16zcrypto / editor https://t.co/fGE7XDfsKo / cofounder @FortuneCrypto

Robert Hackett
Mon Nov 03 13:31:35
"there's nothing interesting on arxiv these days!"  
- the words of an uncurious mind

i have personally been blown away by the volume of interesting papers posted over the last few months, and eagerly following daily digests

here are some papers i enjoyed the most:

- Pre-training under infinite compute (September 2025, https://t.co/3Q838oO6ei)
- Fresh in memory: Training-order recency is linearly encoded in language model activations (September 2025, https://t.co/V9qCttiFPJ)
- Subliminal Learning: Language models transmit behavioral traits via hidden signals in data (July 2025, https://t.co/eJrGChfq1d)
- Memory Limitations of Prompt Tuning in Transformers (September 2025, https://t.co/AJR17dkVUx)
- Behavioral Fingerprinting of Large Language Models (September 2025, https://t.co/ZdHMlIdcYP)
- Language Self-Play For Data-Free Training (September 2025, https://t.co/9kLvY8dNbe)
- The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs (September 2025, https://t.co/X7bwtKE8xe)
- Do Natural Language Descriptions of Model Activations Convey Privileged Information? (September 2025, https://t.co/4qjWhFJVUG)
- Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing (September 2025, https://t.co/2ejyGDCSVF)
- Stochastic activations (September 2025, https://t.co/1xoXmLeIiF)
- PonderLM-2: Pretraining LLM with Latent Thoughts in Continuous Space (September 2025, https://t.co/gZW50tvCIK)
- Words That Make Language Models Perceive (October 2025, https://t.co/IDQEXdeAGv)
- Language Models Do Not Embed Numbers Continuously (October 2025, https://t.co/g8Cw3yNcoV)
- Learning Facts at Scale with Active Reading (August 2025, https://t.co/aw3fE8dKiJ)
- OverFill: Two-Stage Models for Efficient Language Model Decoding (August 2025, https://t.co/Wku5FXbGEz)
- Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs (August 2025, https://t.co/TWgqTCHjuZ)
- Reasoning-Intensive Regression (August 2025, https://t.co/2G8Lxn323A)
- Watch the Weights: Unsupervised monitoring and control of fine-tuned LLMs (August 2025, https://t.co/im0qdNorNQ)
- On the Theoretical Limitations of Embedding-Based Retrieval (August 2025, https://t.co/7haVnfNpTp)

"there's nothing interesting on arxiv these days!" - the words of an uncurious mind i have personally been blown away by the volume of interesting papers posted over the last few months, and eagerly following daily digests here are some papers i enjoyed the most: - Pre-training under infinite compute (September 2025, https://t.co/3Q838oO6ei) - Fresh in memory: Training-order recency is linearly encoded in language model activations (September 2025, https://t.co/V9qCttiFPJ) - Subliminal Learning: Language models transmit behavioral traits via hidden signals in data (July 2025, https://t.co/eJrGChfq1d) - Memory Limitations of Prompt Tuning in Transformers (September 2025, https://t.co/AJR17dkVUx) - Behavioral Fingerprinting of Large Language Models (September 2025, https://t.co/ZdHMlIdcYP) - Language Self-Play For Data-Free Training (September 2025, https://t.co/9kLvY8dNbe) - The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs (September 2025, https://t.co/X7bwtKE8xe) - Do Natural Language Descriptions of Model Activations Convey Privileged Information? (September 2025, https://t.co/4qjWhFJVUG) - Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing (September 2025, https://t.co/2ejyGDCSVF) - Stochastic activations (September 2025, https://t.co/1xoXmLeIiF) - PonderLM-2: Pretraining LLM with Latent Thoughts in Continuous Space (September 2025, https://t.co/gZW50tvCIK) - Words That Make Language Models Perceive (October 2025, https://t.co/IDQEXdeAGv) - Language Models Do Not Embed Numbers Continuously (October 2025, https://t.co/g8Cw3yNcoV) - Learning Facts at Scale with Active Reading (August 2025, https://t.co/aw3fE8dKiJ) - OverFill: Two-Stage Models for Efficient Language Model Decoding (August 2025, https://t.co/Wku5FXbGEz) - Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs (August 2025, https://t.co/TWgqTCHjuZ) - Reasoning-Intensive Regression (August 2025, https://t.co/2G8Lxn323A) - Watch the Weights: Unsupervised monitoring and control of fine-tuned LLMs (August 2025, https://t.co/im0qdNorNQ) - On the Theoretical Limitations of Embedding-Based Retrieval (August 2025, https://t.co/7haVnfNpTp)

research @cornell // language models, information theory, science of AI

Jack Morris
Mon Nov 03 13:31:25
RT @Yangzi812: On Bilibili I discovered a work by the team of AI animation creator Remo Zhu (朱雷蒙): a 14-minute AI sci-fi narrative film, 《傻子:壳、荣耀与幸存者》 ("The Fool: Shell, Glory, and Survivors"). It is stunning; from now on we can create our own "Love, Death & Robots"! Find it on Bilibili:【傻子:壳、荣耀与幸存者【AI影像征集大赛-中国科…

Moved from investing into entrepreneurship: job hunting, interview questions, resume editing, mock interviews. Startups (cold start) | AI, AIGC | security | RAG | spatiotemporal intelligence | cognitive psychology | agents | life sciences | reinforcement learning. I built open source software at https://t.co/b69DXZhcyR

Y11
Mon Nov 03 13:31:04
yesterday i wrote about how writing clarifies thinking.

today: how it helps memory.

2. writing to remember 

our attention spans are shorter than ever. 

writing is one of the few ways to slow things down and hold onto our own thoughts. 

every note or post is a snapshot of who you were, what you noticed, and what you told yourself you cared about.

if you keep a writing practice, you will quickly realize that most of what feels urgent doesn’t matter.

it’s almost impossible to know what’s important while you’re in the middle of it… writing gives you a way to find out later.

looking back at my own journals, i wince at what i ranked highest priority (usually meeting an arbitrary deadline at the expense of being there for others). 

when you look back, you see yourself clearly… and that can change how you behave today.

we forget most of what we think, and what we think shapes how we act. writing helps us remember.

writing is memory.

intercomputer realist @a16zcrypto / editor https://t.co/fGE7XDfsKo / cofounder @FortuneCrypto

Robert Hackett
Mon Nov 03 13:30:54