Thread Easy

The all-in-one partner for Twitter threads


Explore

Newest first — browse tweet threads

潮流周刊 has somehow already reached issue 243; by this year it has been updated continuously for about 5 years. It mainly covers handy tools for engineers that I come across, open-source products, plus my casual reading and casual musings. New friends are welcome to follow and subscribe to the RSS feed. By the way, when did you first hear about 潮流周刊?
https://t.co/8abZ9vxSJk

Father of Pake • MiaoYan • Mole • XRender

Tw93
Fri Nov 07 13:07:59
RT @ElKomnrs: @skominers Wordle 1,602 1/6
🙏

🟩🟩🟩🟩🟩

https://t.co/KvFo35FTy0

Market Design/Entrepreneurship Professor @HarvardHBS & Faculty Affiliate @Harvard Economics; Research @a16zcrypto; Editor @restatjournal; Econ @Quora; … | #QED

Scott Kominers
Fri Nov 07 13:05:15
Each release like this is such a complete humiliation and indictment of Meta, which pioneered open-weight LLMs with the first Llama model introduced in February of 2023.

They’ve likely invested 100x to 1000x the resources (money, compute, PhD headcount, square footage, etc.) on a cumulative basis compared to any of these other Chinese labs (Kimi, Z, Qwen, DeepSeek, etc.). 

By all rights, they should be way ahead of everyone else. And yet they haven’t had a state-of-the-art open-weight model, or even a modestly compelling model, since Llama 3.3, which was released at the end of 2024, nearly a year ago.

Meanwhile, the Chinese labs have been leapfrogging each other like crazy, so the latest models are now extremely capable.

Could Meta still pull a rabbit out of a hat and leapfrog these other labs as a result of all the brilliant people they’ve hired at great expense over the last few months? 

I suppose they could, but even then they’d likely be getting far, far less bang for the buck compared to these half a dozen or so Chinese labs.

This would be like if the Soviets stole the nuclear bomb secrets and then ended up testing a hydrogen bomb 2 years before the US did. Unthinkable.

Makes it a lot more understandable why Zuck has been doing a massive purge of the organization. I would want to clean house, too, in this case. And better to err on the side of caution and cut deeper to be sure you’ve removed all the rot.

Former Quant Investor, now building @lumera (formerly called Pastel Network) | My Open Source Projects: https://t.co/9qbOCDlaqM

Jeffrey Emanuel
Fri Nov 07 13:05:11
RT @xlr8harder: Someone from xAI reached out and asked me to retest grok-4-fast, because they've improved the injected system prompts. Huge…

We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «C’est la guerre.» ®1

Teortaxes▶️ (DeepSeek 推特🐋铁粉 2023 – ∞)
Fri Nov 07 13:04:57
RT @pushpak1300: Boost has been released with several new features and important fixes. 🚀

✅ Automatic Sail detection + MCP config setup
✅…

~20 yrs in web-dev, now mostly Laravel. My Laravel courses: https://t.co/HRUAJdMRZL My YouTube channel: https://t.co/qPQAkaov2F

Povilas Korop | Laravel Courses Creator & Youtuber
Fri Nov 07 13:03:01
My "profile pic"  is from a comic on "Our Deepfake Future" in Virginia Quarterly @vqr, drawn by Ali Fitzgerald (an artist who also published in the New Yorker and such).. 🤓 https://t.co/8ix2AwRDKo (here is a local pdf https://t.co/bga5ZbuHXZ)

My "profile pic" is from a comic on "Our Deepfake Future" in Virginia Quarterly @vqr, drawn by Ali Fitzgerald (an artist who also published in the New Yorker and such).. 🤓 https://t.co/8ix2AwRDKo (here is a local pdf https://t.co/bga5ZbuHXZ)

AI researcher & teacher @SCAI_ASU. Former President of @RealAAAI; Chair of @AAAS Sec T. Here to tweach #AI. YouTube Ch: https://t.co/4beUPOmf6y Bsky: rao2z

Subbarao Kambhampati (కంభంపాటి సుబ్బారావు)
Fri Nov 07 13:02:23