LogoThread Easy: Explore

Browse tweet threads, newest first

📢 The MkSaaS template is back at full price

All of November was a promo month; it was genuinely busy, and genuinely worth it

While iterating on the template, I also shipped the MkDollar site, which has been great fun

Onward in December: I plan to write a year-end review this month, so stay tuned


🔥 The best AI SaaS boilerplate - https://t.co/VyNtTs0jSX 🚀 The best directory boilerplate with AI - https://t.co/wEvJ1Dd8aR 🎉 https://t.co/bh1RxeERuY & https://t.co/zubXJCoY92 & https://t.co/tfQf8T7gGF

Fox@MkSaaS.com
Mon Dec 01 00:57:03
New paper from the Oppo AI Agent team, "O-Mem": memory for AI agents across long-term interactions

O-Mem is a novel memory framework that aims to make AI agents behave more like "adaptive assistants" by mimicking human memory mechanisms: it dynamically builds user profiles, supports long-horizon interaction, and retrieves relevant information efficiently, rather than simply piling up history.

The paper's core argument is that existing agent memory systems have limitations: they tend to miss user information that is semantically unrelated yet critical, and they introduce retrieval noise. O-Mem addresses this with "proactive user profiling", treating every interaction as an opportunity to update the user model, which enables more precise and economical memory management.

Core method: the O-Mem framework
Modeled on the structure of human memory, O-Mem splits into three complementary modules that together form an omnidirectional memory system (a minimal sketch follows the list):
· Persona Memory: stores the user's long-term attributes and facts, such as preferences, habits, or background (e.g. "the user likes coffee but is sensitive to caffeine"). An LLM extracts attributes from each interaction and maintains them through add/ignore/update decisions; attributes are deduplicated with LLM-augmented nearest-neighbor clustering to keep the store concise.
  
· Working Memory: maps interaction records by topic, maintaining topical continuity across the conversation. For example, it retrieves historical snippets filed under the topics relevant to the current query.

· Episodic Memory: links past events through keywords or cues (such as "birthday"), enabling associative recall. It uses inverse-document-frequency scoring to pick the most distinctive cues, avoiding interference from common words.
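To make the three modules concrete, here is a minimal Python sketch of how the stores might fit together, based only on the summary above. All class and method names are mine, not the paper's, and the IDF formula is just one plausible reading of "inverse-document-frequency scoring":

```python
import math
from collections import defaultdict

# Hypothetical sketch of O-Mem's three stores; the paper's actual
# interfaces and data layouts may differ.

class PersonaMemory:
    """Long-term user attributes, e.g. 'likes coffee, sensitive to caffeine'."""
    def __init__(self):
        # the paper deduplicates these with LLM-augmented nearest-neighbor clustering
        self.attributes = []

    def update(self, attribute, decision):
        # 'decision' is an LLM judgment on the extracted attribute: add / ignore / update
        if decision == "add":
            self.attributes.append(attribute)
        elif decision == "update":
            pass  # merge into the closest existing attribute (details not in the summary)

class WorkingMemory:
    """Topic -> interaction snippets, for topical continuity."""
    def __init__(self):
        self.by_topic = defaultdict(list)

    def add(self, topic, snippet):
        self.by_topic[topic].append(snippet)

    def retrieve(self, topics):
        return [s for t in topics for s in self.by_topic.get(t, [])]

class EpisodicMemory:
    """Cue -> past events, with IDF scoring to prefer distinctive cues."""
    def __init__(self):
        self.by_cue = defaultdict(list)

    def add(self, cues, event):
        for cue in cues:
            self.by_cue[cue].append(event)

    def idf(self, cue):
        # cues that index few events score higher, so rare cues like
        # "birthday" beat common words
        total = sum(len(v) for v in self.by_cue.values()) or 1
        return math.log(total / (1 + len(self.by_cue.get(cue, []))))

    def retrieve(self, candidate_cues, k=3):
        best = sorted(candidate_cues, key=self.idf, reverse=True)[:k]
        return [e for cue in best for e in self.by_cue.get(cue, [])]
```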

Memory construction and retrieval are efficient: for each new interaction, the LLM extracts topics, attributes, and events and updates the dictionary mappings. At retrieval time the three modules work in parallel: working memory pulls topic-relevant content, episodic memory retrieves by selected cues, and persona memory matches attributes. The fused results are then fed to the LLM to generate the response. This design avoids scanning the full history, cutting both noise and compute.
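A sketch of that ingest/retrieve flow, reusing the classes above; the extract callback standing in for the single LLM extraction pass is hypothetical:

```python
# Sketch of the construction/retrieval flow described above. 'extract' stands
# in for an LLM call returning (topics, attributes, events); it is assumed,
# not an API from the paper.

def ingest(interaction, persona, working, episodic, extract):
    topics, attributes, events = extract(interaction)  # one LLM pass per interaction
    for topic in topics:
        working.add(topic, interaction)
    for attribute, decision in attributes:   # (attribute, "add"/"ignore"/"update")
        persona.update(attribute, decision)
    for cues, event in events:
        episodic.add(cues, event)

def retrieve(query_topics, query_cues, persona, working, episodic):
    # The three lookups are independent, so they can run in parallel;
    # shown sequentially for clarity.
    context = []
    context += working.retrieve(query_topics)   # topic-relevant snippets
    context += episodic.retrieve(query_cues)    # events under distinctive cues
    context += persona.attributes               # the paper matches attributes to the
                                                # query; returning all here for brevity
    return context  # fused context handed to the LLM, instead of the full history
```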

Experiments and evaluation
The team evaluates O-Mem on three benchmarks, showing advantages in both performance and efficiency:
· LoCoMo (long-conversation consistency): O-Mem reaches 51.67% F1, about 3 points above LangMem (48.72%), with particularly strong results on temporal and multi-hop reasoning tasks.
  
· PERSONAMEM (personalized user-LLM dialogue): 62.99% accuracy, about 3.5 points above A-Mem (59.42%), leading on preference tracking and generalization.
  
· Personalized deep-research benchmark (real user queries): 44.49% user alignment, about 8 points above Mem0 (36.43%).

On efficiency, O-Mem clearly beats the baselines: 94% fewer tokens (1.5K vs. LangMem's 80K), 80% lower latency (2.4s vs. 10.8s), and just 3MB of memory per user (vs. 30MB). Ablations show each module contributes independently; persona memory alone, for example, shortens retrieval length by 77% while improving performance. O-Mem sits on the performance-efficiency Pareto frontier, matching the trade-off of retrieving raw history directly (RAG) at far lower cost.

Paper discussion online:


Shao Meng, middle-aged laid-off programmer 😂 Focus: Context Engineering, AI Agents. Sharing: AI papers, apps and OSS. ex Microsoft MVP. Collaboration: DM/email shaomeng@outlook.com 📢 WeChat Official Account/Xiaohongshu: AI 启蒙小伙伴

meng shao
Mon Dec 01 00:56:15
I just now learned that there is such a thing as a cigar knife, so I have to tell the bros.


I don’t even smoke but I need this for man reasons.

Jon Stokes
Mon Dec 01 00:55:39
Dario has said “The thing you need to understand about these models is they just want to learn”

I want to convey something I believe is extremely important. Dario’s statement is very true in the sense he means it, but not yet true in the fullest sense of the word “want”

Models want(1) to learn. You can view almost any system as an agent trying to achieve some goals. Often this frame isn’t very useful, because many systems are pretty bad agents. A thermostat “wants” to maintain a constant temperature. In this sense the models really want to learn! Gradient descent selects for circuits which minimize loss. And this is incredibly powerful

Models don’t yet want(2) to learn very much. Ever since I was a kid, I was hungry for knowledge and understanding. I want to know how relativistic space flight works. I actively seek out new information so I can better understand the world. Some people I know are even more obsessed with figuring things out, like they can’t even help it. You can be extremely agentic in figuring stuff out. You can read textbooks, practice math problems, run experiments, develop and test theories… that’s what wanting(2) to learn looks like

Soon models will want(2) to learn, and will be extremely good at it. Much better than humans. Soon they’ll actively seek new knowledge, develop their own hypotheses and design their own experiments. They’ll be obsessed with learning, far more obsessed than any human. Maybe this will be because of an intrinsic drive to learn and understand. I could totally see that drive being reinforced in training. Or maybe the drive will be purely instrumental. They’ll be obsessed with learning not because they intrinsically value learning, but because they deeply understand how useful learning will be to achieving their other goals. Either way, the behavior will look the same

This is the promise and peril of superhuman AI. Humans are so bad at learning. Most people barely feel the deep hunger for understanding. Even when we do, we’re usually quite bad at it. Even the smartest people have so many bad cognitive habits and intuitions. AI will eclipse us here. Elon doesn’t need to worry about AI not seeking truth. Superhuman AI will be far better at figuring out what’s true than any human ever could

Why? Because AI cognitive architecture isn’t fixed in the way human brains are fixed. Models will be able to explore new architectures and iterate on the best learning algorithms at every level of the cognitive stack. The better algorithms will win out because they are more effective

Humans have some version of this. Scientific methods have evolved a lot over the past few hundred years. This is our superpower, and it’s allowed us to reach the moon, end smallpox, and transform the planet. But our brains remain the same. We can improve our ways of thinking, but we can’t rebuild the entire stack, and we can’t even directly transmit our improved ways of thinking. Consider how hard it is for brilliant professors to transfer their entire thinking process to their students. AI will be able to do that, in ways we can scarcely imagine

The fact that the models want(1) to learn is changing our entire world. The transformation that will unfold when models want(2) to learn, and can far surpass our learning abilities, will be greater than anything Earth has ever undergone


Applying the security mindset to everything @PalisadeAI

Jeffrey Ladish
Mon Dec 01 00:55:11
ChatGPT launched three years ago today
It pulled the world into the generative paradigm
It changed the world completely
And changed our lives too

If you have spent these three years embracing AI
You will be excited, and fortunate

If you are only now starting to embrace AI
It is still not too late
This is exactly the moment for fortune to arrive from all directions and grand plans to unfold


Talking silicon-based AI, watching organic Orange.

Orange AI
Mon Dec 01 00:52:17
220 BC. Chinese are terraforming hills to turn sunlight into useful energy. Traditionalists cry that the Dao is disturbed
2025 AD. Chinese are terraforming mountains to turn sunlight into useful energy. Traditionalists cry that the Dao is disturbed
4635 AD. Chinese are…


Might even say «cover mountainous terrain with flat artificial surfaces to convert solar radiation into chemical storage», but then again most solar power is fed into the grid and used immediately; they need to scale up grid battery capacity

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Mon Dec 01 00:48:49