Thread Easy

Your all-in-one Twitter thread assistant

Explore

Newest first; browse threads as cards.

RT @LidongBing: 🔥 Introducing LongVT: Teaching Multimodal LLMs to "Actively Look Back" and understand long videos just like humans!
We tack…

AI research paper tweets, ML @Gradio (acq. by @HuggingFace 🤗) dm for promo, submit papers here: https://t.co/UzmYN5XOCi

AK
Tue Dec 02 16:05:17
People were complaining that DeepSeek still doesn't do vision. Well, why should they bother? DeepSeek (France) can take care of that part. Enjoy.

We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «C’est la guerre.» ®1

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Tue Dec 02 16:04:51
The first Western lab to release a non-thinking model that is close to SOTA Chinese open source (DeepSeek, Kimi K2, etc.).

A reasoning model is on the way. What is great is that it is multimodal (DeepSeek and Kimi K2 are not).

Impressive!

Key things to note:
---------------------------
1. 41B active parameters and 675B total parameters
2. Trained from the ground up on 3,000 H200s (not a DeepSeek fine-tune)
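
A quick check on those figures: an active/total ratio this low suggests a sparsely activated mixture-of-experts design (the thread does not say so explicitly), with only about 6% of the weights used per token. A back-of-envelope sketch:

```python
# Back-of-envelope arithmetic on the figures above. A ratio this low
# suggests a sparse MoE: only a small fraction of the total weights
# participates in any single forward pass.
active_params_b = 41    # billions of parameters active per token
total_params_b = 675    # billions of parameters total

print(f"Active fraction per token: {active_params_b / total_params_b:.1%}")
# -> Active fraction per token: 6.1%
```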

Deployment (Single Node)
---------------------------
FP8: This is the instruct post-trained version in FP8, making it ideal for chat, agentic, and instruction-based use cases.

1. FP8 on a single node of B200s or H200s.
2. NVFP4 on a single node of H100s or A100s.
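
For concreteness, here is a minimal sketch of what option 1 above could look like, assuming vLLM as the serving stack (the thread does not name one); the model ID is a placeholder, since the actual checkpoint name sits behind the shortened link below:

```python
# Minimal single-node FP8 serving sketch, assuming vLLM. The model ID is
# a placeholder, not the actual release name, which is behind the
# shortened link in the original tweet.
from vllm import LLM, SamplingParams

llm = LLM(
    model="org/model-instruct-fp8",  # hypothetical Hugging Face repo id
    quantization="fp8",              # serve the FP8 weights directly
    tensor_parallel_size=8,          # shard across one full node of 8 GPUs
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain what a mixture-of-experts model is."], params)
print(outputs[0].outputs[0].text)
```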

https://t.co/82WKbULeOS

Artificial Intelligence @amazon. RL, General Purpose Agents, OSS AI. All views personal!

GDP at NeurIPS 2025
Tue Dec 02 15:57:00
Ladies and gentlemen, Dago is back! 🤩

Well, not yet, but maybe soon? 😬

All it took was seeing @jackfriks killing it with his app. Cool journey to witness, btw.

Keep shipping, nerds!

I build stuff. On my way to making $1M 💰 My projects 👇

Florin Pop 👨🏻‍💻
Tue Dec 02 15:56:20
Learn while doing, do while learning.
Do more, err more; do less, err less; do nothing, err nothing.
More mistakes, more gains; fewer mistakes, fewer gains; no mistakes, no gains.
Launch site after site and step into enough pitfalls; only then do you build real skill, so that when a big opportunity comes along you can ship fast with as few mistakes as possible.
Then you can take off.

Joy comes from sweat.

哥飞
Tue Dec 02 15:56:16
@RashmiDVS You know, every time someone taunts India as a “Vishwa Guru” and insults the aspiration — conveniently forgetting a few thousand years of history — I feel like telling them:

Relax. The post was filled long before your hot take.

Because while some people limit their civilizational imagination to memes and mockery, a 19-year-old in Varanasi just quietly demonstrated what actual Vishwa Guru-level capability looks like.

Vedamurti Devavrat Mahesh Rekhe completed the Dandakrama Parayanam — 2,000 mantras of the Shukla Yajurveda, recited flawlessly over 50 straight days without a single break or mistake.

No headphones.
No apps.
No “download PDF.”
Just a human mind doing something so rare that it hasn’t been achieved in over 200 years.

And then my friend — a neuroscientist, not the spiritual type, not into symbolism, strictly synapses-and-data — tells me this:

“What he did is basically neuroplasticity on god-mode. You’re watching a human brain being engineered by tradition.”

I asked him to break it down for me in normal-people language. Here’s what he said:

1. His brain physically rewired itself.

Yes, physically.
Vedic recitation at this intensity forces the brain to strengthen neural pathways for sound, rhythm, memory, and attention.
It’s cognitive bodybuilding.

2. This oral tradition is an information-preservation technology.

While other civilizations carved memory into stone, India carved it into people.
Not because we lacked writing — but because the human mind was the safer, more resilient, more portable hard drive.

3. This creates a civilizational “neural lineage.”

A guru shapes a student’s brain →
the student becomes a guru →
he shapes the next brain.
Repeat this across millennia and you get a self-sustaining chain of memory encoded in living minds.

4. AI can store data, but it can’t inherit legacy.

This is the part no Valley philosopher understands.

AI remembers files.
Humans remember meaning.
AI reproduces data.
Humans reproduce identity.
A server outage can wipe an algorithm.
But you can’t delete a tradition that’s encoded into the neurobiology of thousands of trained minds over centuries.

5. This is why these traditions have outlived every empire.

When kingdoms fell, temples burnt, and libraries vanished, the knowledge survived because the archive was walking, breathing, remembering.

And then my neuroscientist friend added one last line — the kind only a scientist can deliver with a straight face:
“That boy isn’t performing a ritual. He’s demonstrating the most advanced long-term memory protocol ever designed by humans.”

So the next time someone smirks at the idea of Vishwa Guru as if it’s some overconfident slogan, gently remind them:

A civilization that trains minds like this isn’t aspiring to be a Vishwa Guru.
It’s remembering that it already was one — and still can be.

Empires end.
Data centers fail.
But a 19-year-old, in a sacred city, reciting 2,000 mantras perfectly for 50 days?

That’s civilization whispering:
“I am still here.”

Allergic to Stupidity. Individual = not Secular Seeker not Believer = disqualified from being a Congressman! Liberal in the Real sense & hence not Nehruvian!

Siddhartha Speaks
Tue Dec 02 15:54:08