LogoThread Easy

The all-in-one partner for Twitter threads

Explore

Newest first — browse tweet threads

Scrolling through my podcast feed it feels like every show and guest is a rolling list of midlife crises with various culprits and predictions of doom.

I’m convinced if someone, really anyone, wrote a book called “How to Do Things and Have Fun” it would eclipse Dale Carnegie’s very practical self-help manual in sales and become an instant classic.

Someone should write this rosy book.

How to Win Friends is truly great because it’s easy advice anyone can follow. “Call people by their names! They’ll like you.” Easy. The equivalent here would be something like “smile even when you’re not happy. People will think you are and you’ll feel happy too.” Easy stuff!

avatar for Katherine Boyle
Katherine Boyle
Mon Nov 10 14:33:41
Something maddening happened today. A user reported a problem with the product, but I couldn't find that user anywhere, so I asked him to send me a screenshot. When I saw it my blood pressure shot up: the product had been copied pixel for pixel, and the most absurd part is they didn't even change the support email or the help-docs URL. If it was unintentional, he's a real genius; if it was deliberate, it's just disgusting. The payment details he did remember to change, of course, while trying to dump the after-sales support back on me.

Down-to-earth dreamer / indie developer / Adventist / rookie dad 👨‍🍼

avatar for KIWI
KIWI
Mon Nov 10 14:28:20
RT @thisisgrantlee: Today, as shared by The New York Times, we’re announcing two things:

>Our Series B at a $2.1B valuation led by @sarahd…

🏗️ Love to build stuff (@runwayco, @sandboxvr, @postmates, @zynga) people love. 💸 Investor @amplitude_hq, @mercury, @owner, @elevenlabsio, @meetgamma ++

avatar for Siqi Chen
Siqi Chen
Mon Nov 10 14:27:40
When you validate your ideas, do you seek out information that confirms them or do you play devil's advocate to look for facts to invalidate them?

Are you honest with yourself about this?

(Bonus points if you actually looked for evidence to the contrary just now 🙃)

Building https://t.co/od97B0HVrk and https://t.co/666FnyVVE0 in Public. Raising all the boats with kindness. 🎙️ https://t.co/6w69DZmi8H · ✍️ https://t.co/lpnor5rsTW

avatar for Arvid Kahl
Arvid Kahl
Mon Nov 10 14:26:31
I have great and bad news. 

The bad news is that I'm ABSOLUTELY fried 😴 and have to postpone the Redreach launch to tomorrow.

The great news is that I seem to be absolutely healthy 💪

Decided to do my annual 🏥 health checkup with my girlfriend here in Bangkok today.

It's basically a 4h long ordeal where they:

🩸 Do a comprehensive blood test
👀 Eye health 
💟 Heart EKG
📡 Lower abdomen ultrasound
☢️ Chest X-ray

Essentially anything that might be rekt in your body will be checked 😅

Results came back and we both seem to be luckily very healthy. Even my LDL cholesterol significantly lowered from my last visit. 

I am pretty much on an animal-based diet so I naturally have higher LDL levels, which is fine, but still good to see it go down.

The entire process was extremely smooth (just very tiring).

Paid a total of $355 for everything including the final doctor consultation. 

If you are able to travel I highly recommend these types of annual health checkups either in Thailand or Malaysia as they are extremely affordable and the service is outstanding.

⚡ Founder and 🌊 Surfer sharing lessons bootstrapping SaaS. ✍️ Notion Docs ➯ Help Center @HelpkitHQ 💰 Get Customers With Reddit ➯ https://t.co/sCWi6vTA7m

avatar for Dominik Sobe ツ
Dominik Sobe ツ
Mon Nov 10 14:26:03
The 33 key concepts behind LLMs, explained: a clear guide from fundamentals to practice that skips the math and goes straight to the essentials

The core foundations of LLMs: from text to intelligent prediction
An LLM is a generative AI model built on machine learning and natural language processing, specialized in text. It works like a super-intelligent autocomplete: given an input such as “What is fine-tuning?”, the model predicts the next token one at a time and gradually assembles a complete sentence. For example, it might first output “Fine-tuning”, then “is”, “the”, “process”…
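
This "predict the next token, append it, repeat" loop can be sketched in a few lines. A minimal illustration, assuming the Hugging Face transformers library and GPT-2 purely as a small stand-in model (not any specific model discussed here):

```python
# Greedy "autocomplete" loop: predict one token at a time and append it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("Fine-tuning is", return_tensors="pt").input_ids
for _ in range(20):                               # generate 20 more tokens
    logits = model(ids).logits                    # a score for every vocabulary token
    next_id = logits[0, -1].argmax()              # greedy: take the most likely next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=-1)

print(tokenizer.decode(ids[0]))
```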

· Tokens: the smallest units an LLM processes, covering words, subwords, and punctuation. Input text is first split by a tokenizer into numeric IDs (e.g. “What” might map to 1023) so the model can compute over it. In short, tokenization lets the model “read” huge amounts of data efficiently, but it also means long texts can get truncated.

· Embeddings: token IDs are turned into high-dimensional vectors that capture semantic similarity in a latent space. For example, the vectors for “dog” and “puppy” sit close together, and “king - man + woman ≈ queen”. This lets the model handle synonymous phrasings instead of rote memorization (a toy sketch follows after this list).

· Parameters: the billions of adjustable “knobs” inside the model, optimized during training to encode language patterns, grammar, and knowledge. In the pre-training stage, the model repeatedly predicts the next token over massive text corpora and accumulates “world knowledge”.
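
To make the embedding bullet concrete, here is a toy sketch of vector arithmetic and cosine similarity. The four-dimensional vectors are invented for illustration; real embeddings have hundreds or thousands of dimensions learned from data:

```python
# Toy embedding arithmetic: "king - man + woman" lands closest to "queen".
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.10, 0.7]),
    "man":   np.array([0.1, 0.9, 0.00, 0.6]),
    "woman": np.array([0.1, 0.1, 0.90, 0.6]),
    "queen": np.array([0.9, 0.0, 0.95, 0.7]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

analogy = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda word: cosine(emb[word], analogy))
print(best)   # with these made-up numbers: "queen"
```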

A pre-trained base model can only continue text; it cannot respond to instructions. Instruction fine-tuning turns it into an instruct model that learns to follow user prompts. Further alignment ensures outputs are helpful, honest, and harmless, typically using reinforcement learning from human feedback (RLHF) to train a reward model that favors high-quality responses.

Interaction and generation: prompts, inference, and efficiency
A conversation with an LLM is driven by prompts, which include a system prompt (defining the role, e.g. “answer concisely and avoid bias”) and a user prompt (the actual question). Total prompt length is bounded by the context window, typically a few thousand to several hundred thousand tokens, so long conversations may require truncating the history.
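
A sketch of what such a prompt can look like in code, using the common OpenAI-style chat message schema; the 8,000-token budget and the "four characters per token" estimate are rough assumptions for illustration:

```python
# Build a chat prompt: system prompt + trimmed history + user prompt.
MAX_TOKENS = 8_000                               # assumed context-window budget

def rough_token_count(text: str) -> int:
    return len(text) // 4                        # crude estimate, not a real tokenizer

def build_messages(history: list[dict], user_question: str) -> list[dict]:
    messages = [{"role": "system",
                 "content": "Answer concisely and avoid bias."}]
    budget = MAX_TOKENS - rough_token_count(user_question)
    kept = []
    for turn in reversed(history):               # keep the most recent turns
        budget -= rough_token_count(turn["content"])
        if budget <= 0:
            break                                # older history gets truncated
        kept.append(turn)
    messages += list(reversed(kept))
    messages.append({"role": "user", "content": user_question})
    return messages
```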

· Zero-shot vs. few-shot learning: zero-shot asks the question directly and relies on the model's built-in knowledge; few-shot adds examples to the prompt to steer the output format, e.g. providing bullet points to request a list-style summary (a prompt sketch follows after this list).

· Reasoning and chain-of-thought (CoT): for complex problems, a “think step by step” prompt improves accuracy. Newer models (e.g. Gemini 2.5 Pro) build this mechanism in, simulating step-by-step human reasoning.
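
A small sketch of how a few-shot example and a chain-of-thought instruction can be combined into one prompt; the example Q/A pair is invented and would normally match your own task:

```python
# Few-shot prompt with a chain-of-thought nudge.
FEW_SHOT = """Q: Summarize: "The meeting moved to Friday and the budget was approved."
A:
- Meeting moved to Friday
- Budget approved
"""

def make_prompt(text: str) -> str:
    return (
        FEW_SHOT                                  # few-shot: demonstrates the bullet-list format
        + f'Q: Summarize: "{text}"\n'
        + "Think step by step, then answer in the same format.\nA:"   # CoT instruction
    )

print(make_prompt("The launch slipped a week but the demo went well."))
```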

Generation itself is called inference: the model emits tokens one by one until an end-of-sequence marker. The key latency factors for user experience are time to first token (TTFT) and the gap between subsequent tokens. The temperature parameter controls randomness: a low value (0.0) gives consistent output, a high value sparks creativity but may drift from the facts.
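
As a sketch of the temperature setting in practice, here is a call using the OpenAI Python client; the model name is an arbitrary choice for illustration and an OPENAI_API_KEY is assumed to be set in the environment:

```python
# Same question, two temperatures: low = stable answers, high = more varied answers.
from openai import OpenAI

client = OpenAI()

def ask(question: str, temperature: float) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",                      # assumed model name, purely illustrative
        messages=[{"role": "user", "content": question}],
        temperature=temperature,
    )
    return resp.choices[0].message.content

print(ask("Name a color.", temperature=0.0))      # nearly identical across runs
print(ask("Name a color.", temperature=1.2))      # noticeably more varied
```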

Extension mechanisms: from RAG to agents
LLMs rarely run in isolation; they are often combined with external tools to improve reliability.

· RAG: first retrieve relevant documents from a database or the web, inject them into the prompt, and then generate the response, which reduces hallucinations (confidently fabricated information). Perplexity AI, for example, searches the web and cites its sources (a minimal sketch follows after this list).

· Workflows vs. agents: a workflow is a fixed sequence of steps (like RAG's retrieve-augment-generate), suited to repetitive tasks. An agent plans dynamically: it can choose tools on its own, break down goals, and execute multi-step operations. For example, an agent can search for material and summarize it into a study guide, which is far more flexible than a static pipeline.
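
A minimal RAG sketch under simplifying assumptions: keyword overlap stands in for real vector search, the documents are invented, and call_llm is a placeholder for whatever chat completion call you use:

```python
# Retrieve-augment-generate in miniature.
DOCS = [
    "Fine-tuning adapts a pre-trained model to a task using a smaller labeled dataset.",
    "RAG retrieves documents at query time and adds them to the prompt.",
    "Temperature controls how random the sampled tokens are.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Score documents by shared words with the query (a stand-in for vector search).
    words = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: len(words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    return f"[answer grounded in a prompt of {len(prompt)} characters]"   # placeholder

def rag_answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

print(rag_answer("What is fine-tuning?"))
```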

Other variants include small language models (SLMs, under roughly 15 billion parameters, suited to on-device use) and multimodal models (e.g. GPT-4o, which handles text plus images). Open-weight models (e.g. Llama 3.1) publish their weights for easy customization; proprietary models (e.g. GPT-5) are accessed via API with an emphasis on safety.

Evaluation, challenges, and future directions
The article takes an honest look at LLM weaknesses: hallucinations (fabricated facts), weak reasoning (math errors are common), data bias (stereotypes inherited from the training set), and knowledge cutoffs (information goes stale after training). Mitigations include RAG grounding, tool integration (e.g. calculators), and RLHF to reduce bias, but each comes with trade-offs: gains in accuracy often cost speed or money.

Evaluation relies on benchmarks (e.g. MMLU for knowledge, HumanEval for code) and metrics such as faithfulness (whether the answer stays true to its sources). The emerging LLM-as-Judge approach has another model score outputs automatically, speeding up iteration.
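
A sketch of the LLM-as-Judge idea: ask a second model to grade an answer's faithfulness to its source on a 1-5 scale. The rubric wording is invented, and call_llm is again a placeholder for a real chat completion call:

```python
# Judge prompt plus a tolerant score parser.
import re

def judge_prompt(source: str, answer: str) -> str:
    return (
        "You are a strict grader. Score how faithful the ANSWER is to the SOURCE, "
        "from 1 (contradicts or invents facts) to 5 (fully supported). "
        "Reply with the number only.\n\n"
        f"SOURCE:\n{source}\n\nANSWER:\n{answer}\n\nScore:"
    )

def parse_score(reply: str):
    match = re.search(r"[1-5]", reply)
    return int(match.group()) if match else None   # None = unusable judge reply

def call_llm(prompt: str) -> str:
    return "5"                                      # placeholder judge reply

score = parse_score(call_llm(judge_prompt("The sky is blue.", "The source says the sky is blue.")))
print(score)
```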

Article link:

Shao Meng, middle-aged laid-off programmer 😂 Focus - Context Engineering, AI Agents. Sharing - AI papers, apps and OSS. ex Microsoft MVP. Collaboration - DM / email: shaomeng@outlook.com 📢 WeChat official account / Xiaohongshu: AI 启蒙小伙伴

avatar for meng shao
meng shao
Mon Nov 10 14:22:53