Thread Easy

Your all-in-one companion for Twitter threads

Explore

Newest first — browse tweet threads

"there's nothing interesting on arxiv these days!"  
- the words of an uncurious mind

i have personally been blown away by the volume of interesting papers posted over the last few months, and have been eagerly following the daily digests

here are some papers i enjoyed the most:

- Pre-training under infinite compute (September 2025, https://t.co/3Q838oO6ei)
- Fresh in memory: Training-order recency is linearly encoded in language model activations (September 2025, https://t.co/V9qCttiFPJ)
- Subliminal Learning: Language models transmit behavioral traits via hidden signals in data (July 2025, https://t.co/eJrGChfq1d)
- Memory Limitations of Prompt Tuning in Transformers (September 2025, https://t.co/AJR17dkVUx)
- Behavioral Fingerprinting of Large Language Models (September 2025, https://t.co/ZdHMlIdcYP)
- Language Self-Play For Data-Free Training (September 2025, https://t.co/9kLvY8dNbe)
- The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs (September 2025, https://t.co/X7bwtKE8xe)
- Do Natural Language Descriptions of Model Activations Convey Privileged Information? (September 2025, https://t.co/4qjWhFJVUG)
- Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing (September 2025, https://t.co/2ejyGDCSVF)
- Stochastic activations (September 2025, https://t.co/1xoXmLeIiF)
- PonderLM-2: Pretraining LLM with Latent Thoughts in Continuous Space (September 2025, https://t.co/gZW50tvCIK)
- Words That Make Language Models Perceive (October 2025, https://t.co/IDQEXdeAGv)
- Language Models Do Not Embed Numbers Continuously (October 2025, https://t.co/g8Cw3yNcoV)
- Learning Facts at Scale with Active Reading (August 2025, https://t.co/aw3fE8dKiJ)
- OverFill: Two-Stage Models for Efficient Language Model Decoding (August 2025, https://t.co/Wku5FXbGEz)
- Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs (August 2025, https://t.co/TWgqTCHjuZ)
- Reasoning-Intensive Regression (August 2025, https://t.co/2G8Lxn323A)
- Watch the Weights: Unsupervised monitoring and control of fine-tuned LLMs (August 2025, https://t.co/im0qdNorNQ)
- On the Theoretical Limitations of Embedding-Based Retrieval (August 2025, https://t.co/7haVnfNpTp)

"there's nothing interesting on arxiv these days!" - the words of an uncurious mind i have personally been blown away by the volume of interesting papers posted over the last few months, and eagerly following daily digests here are some papers i enjoyed the most: - Pre-training under infinite compute (September 2025, https://t.co/3Q838oO6ei) - Fresh in memory: Training-order recency is linearly encoded in language model activations (September 2025, https://t.co/V9qCttiFPJ) - Subliminal Learning: Language models transmit behavioral traits via hidden signals in data (July 2025, https://t.co/eJrGChfq1d) - Memory Limitations of Prompt Tuning in Transformers (September 2025, https://t.co/AJR17dkVUx) - Behavioral Fingerprinting of Large Language Models (September 2025, https://t.co/ZdHMlIdcYP) - Language Self-Play For Data-Free Training (September 2025, https://t.co/9kLvY8dNbe) - The Illusion of Diminishing Returns: Measuring Long Horizon Execution in LLMs (September 2025, https://t.co/X7bwtKE8xe) - Do Natural Language Descriptions of Model Activations Convey Privileged Information? (September 2025, https://t.co/4qjWhFJVUG) - Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing (September 2025, https://t.co/2ejyGDCSVF) - Stochastic activations (September 2025, https://t.co/1xoXmLeIiF) - PonderLM-2: Pretraining LLM with Latent Thoughts in Continuous Space (September 2025, https://t.co/gZW50tvCIK) - Words That Make Language Models Perceive (October 2025, https://t.co/IDQEXdeAGv) - Language Models Do Not Embed Numbers Continuously (October 2025, https://t.co/g8Cw3yNcoV) - Learning Facts at Scale with Active Reading (August 2025, https://t.co/aw3fE8dKiJ) - OverFill: Two-Stage Models for Efficient Language Model Decoding (August 2025, https://t.co/Wku5FXbGEz) - Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs (August 2025, https://t.co/TWgqTCHjuZ) - Reasoning-Intensive Regression (August 2025, https://t.co/2G8Lxn323A) - Watch the Weights: Unsupervised monitoring and control of fine-tuned LLMs (August 2025, https://t.co/im0qdNorNQ) - On the Theoretical Limitations of Embedding-Based Retrieval (August 2025, https://t.co/7haVnfNpTp)

research @cornell // language models, information theory, science of AI

Jack Morris
Mon Nov 03 13:31:25
RT @Yangzi812: I came across a work on Bilibili by the team of Remo Zhu (朱雷蒙), a master AI animation creator: a 14-minute AI sci-fi narrative film, The Fool: Shell, Glory, and Survivors (《傻子:壳、荣耀与幸存者》). It's quite stunning; from now on we can make our own "Love, Death & Robots"! Check it out on Bilibili: 【傻子:壳、荣耀与幸存者【AI影像征集大赛-中国科…

Moved from investing into entrepreneurship: job searching, interview questions, resume editing, mock interviews. Startups (cold start) | AI, AIGC | security | RAG | spatiotemporal intelligence | cognitive psychology | agents | life sciences | reinforcement learning. I built open source software at https://t.co/b69DXZhcyR

Y11
Mon Nov 03 13:31:04
yesterday i wrote about how writing clarifies thinking.

today: how it helps memory.

2. writing to remember 

our attention spans are shorter than ever. 

writing is one of the few ways to slow things down and hold onto our own thoughts. 

every note or post is a snapshot of who you were, what you noticed, and what you told yourself you cared about.

if you keep a writing practice, you will quickly realize that most of what feels urgent doesn’t matter.

it’s almost impossible to know what’s important while you’re in the middle of it… writing gives you a way to find out later.

looking back at my own journals, i wince at what i ranked highest priority (usually meeting an arbitrary deadline at the expense of being there for others). 

when you look back, you see yourself clearly… and that can change how you behave today.

we forget most of what we think, and what we think determines how we act. writing helps us remember.

writing is memory.

intercomputer realist @a16zcrypto / editor https://t.co/fGE7XDfsKo / cofounder @FortuneCrypto

Robert Hackett
Mon Nov 03 13:30:54
Either this is a really great security process or a very devious way of getting people to sign up for an affiliate program.

In a way, this email is very instructive for founders, whichever is true.

Building https://t.co/od97B0HVrk and https://t.co/666FnyVVE0 in Public. Raising all the boats with kindness. 🎙️ https://t.co/6w69DZmi8H · ✍️ https://t.co/lpnor5rsTW

Arvid Kahl
Mon Nov 03 13:29:01
Hard to walk away from reading this without thinking that Mira manipulated Ilya because she was mad at Sam for “undermining her.” Considering that Ilya was shocked that employees were so upset by Sam’s ouster, it seems like that manipulation might not have been too difficult.

Former Quant Investor, now building @lumera (formerly called Pastel Network) | My Open Source Projects: https://t.co/9qbOCDlaqM

Jeffrey Emanuel
Mon Nov 03 13:25:00
if there was one thing i would tell my past self it would be what i tell myself every day now: stay curious and follow your genuine curiosity as much as you can, always.

curious guy creating things @ https://t.co/HXWladhJaA - up and coming wife guy

jack friks
Mon Nov 03 13:23:59