
Explore

Newest first — browse tweet threads


It's crazy how much my mindset and current mood affect the quality of my vibecoding.

"A good workman never blames his tools".


UI/UX Designer ϟ Prev: lead designer at @super_ ✱ UI inspiration @DamnGoodUI

Josh Millgate
Mon Dec 08 11:17:34
Every generation has its own English acronyms 😂

These past couple of days I've seen people abbreviate Prompt Engineering as PE; I don't quite get it, but I can accept it. Then I saw Vibe Coding abbreviated as VC/VB, and that I just cannot accept. I don't care who says otherwise: VC is Visual C++ and VB is Visual Basic. Not up for debate...


Shao Meng, middle-aged unemployed programmer 😂 Focus - Context Engineering, AI Agents. Sharing - AI papers, apps and OSS. ex Microsoft MVP. Collaboration - DM/email: shaomeng@outlook.com 📢 WeChat Official Account/Xiaohongshu: AI 启蒙小伙伴

meng shao
Mon Dec 08 11:17:29
Vidnoz Tip: Beyond "Sad Face" — How to Control Any Emotional Shift Using Hailuo in Vidnoz

Want precise control over subtle and complex sad expressions using Hailuo in Vidnoz?
The secret is Sequential Emotional Prompting—describing the emotion as it unfolds, step by step, starting from a neutral expression.

This gives you smooth, natural transitions without the stiff or "jumpy" faces AI sometimes produces.

Here are 4 ways to guide a character's sadness, from quiet sorrow to full emotional breakdown (with prompts + templates!). ⬇️


1 — The 'Subtle Sadness' Escalation (Natural Sadness)

Prompt Example: A close-up of a [person's face] with a neutral expression. Slowly, the corners of their lips relax downward. Their eyes soften, blinking more slowly as a faint sadness appears. Their eyebrows pinch slightly toward the center. The expression deepens gently and naturally.

Prompt Template:
Fixed: A close-up of a [Subject's Face] with a neutral expression. Slowly, the corners of their lips relax downward. Their eyes soften slightly. The sadness grows naturally and gently.
Variable: [Subject's Face] (person's face, character's face, elderly face, etc.)
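As a rough illustration of how the Fixed/Variable split in this template can be used, here is a minimal Python sketch that substitutes the [Subject's Face] slot programmatically. The function and constant names are hypothetical and not part of Vidnoz's tooling; the template text itself is taken from the thread above.

# Minimal sketch: filling the 'Subtle Sadness' template's variable slot.
# `build_sadness_prompt` is a hypothetical helper, not a Vidnoz API.

FIXED_TEMPLATE = (
    "A close-up of a {subject} with a neutral expression. "
    "Slowly, the corners of their lips relax downward. "
    "Their eyes soften slightly. The sadness grows naturally and gently."
)

def build_sadness_prompt(subject: str) -> str:
    """Substitute the [Subject's Face] variable into the fixed template."""
    return FIXED_TEMPLATE.format(subject=subject)

if __name__ == "__main__":
    for subject in ("person's face", "character's face", "elderly face"):
        print(build_sadness_prompt(subject))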

Vidnoz
Mon Dec 08 10:54:47
RT @Dom_Investing: My best performers in 2025:
Of course $GOOGL by a landmark, followed by my Vanguard funds.

Biggest losers: 
$UPS - is th…


Founder 📈 @parqetapp Host of 🎙 @minimalempires Prev. @stripe

Sumit Kumar
Mon Dec 08 10:47:11
Any time I've trained a transformer from scratch on webtext, the loss curve looks like this. The first drop makes sense, but why the second one?

Gemini is telling me nonsense.

Architecture: same as GPT-2 except SwiGLU, RoPE, untied embeddings

Training:
Muon + Adam
linear warmup (up to 500 steps)

My best guess is the induction head formation meme, but my understanding is that this happens quite late, after several thousand training steps or around a billion tokens, and I have 100k tokens per batch.

Do any transformer training people know why this happens?
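For readers unfamiliar with the warmup setup being described, here is a minimal sketch of a linear learning-rate warmup over 500 steps, as mentioned in the tweet. The peak learning rate, the constant schedule after warmup, and the function name are illustrative assumptions, not the author's actual configuration.

# Sketch of linear LR warmup to a peak value over 500 steps.
# PEAK_LR and the post-warmup behavior are assumptions for illustration.

WARMUP_STEPS = 500
PEAK_LR = 3e-4  # assumed peak learning rate

def warmup_lr(step: int) -> float:
    """Linearly ramp the learning rate from 0 to PEAK_LR over WARMUP_STEPS."""
    if step < WARMUP_STEPS:
        return PEAK_LR * (step + 1) / WARMUP_STEPS
    return PEAK_LR  # held constant afterwards (post-warmup schedule not specified)

if __name__ == "__main__":
    for s in (0, 100, 250, 499, 500, 1000):
        print(s, warmup_lr(s))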


Interests: AI (Safety), meditation, philosophy, mathematics, algorithms. If I say something you disagree with, please DM or quote tweet. I love to argue!

William Wale
Mon Dec 08 10:46:32