Thread Easy
  • Explore
  • Compose Threads

Your one-stop companion for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Browse tweet threads, newest first


I don't think there's a more diverse and international platform in AI than @huggingface! Current trending models are coming from all over the world in all sorts of modalities & sizes. That is AI maturing at the speed of light!

Co-founder & CEO @HuggingFace 🤗, the open and collaborative platform for AI builders

clem 🤗
Tue Dec 09 00:13:13
RT @benlandautaylor: As a kid I'd hear people say "America is the freest country in the world" and not really think about it. Then in colle…

Building Beneficial AGI - CEO @asi_alliance @singularitynet, @true_agi , Interim CEO @Singularity_Fi, @SophiaVerse_AI, Chair @opencog @HumanityPlus @iCog_Labs

Ben Goertzel
Tue Dec 09 00:11:59
RT @JoshPurtell: You can now watch GEPA work on long-horizon agentic policies, for free on use synth dot ai

https://t.co/TRbr0lPkN2

Asst professor @MIT EECS & CSAIL (@nlp_mit). Author of https://t.co/VgyLxl0oa1 and https://t.co/ZZaSzaRaZ7 (@DSPyOSS). Prev: CS PhD @StanfordNLP. Research @Databricks.

Omar Khattab
Tue Dec 09 00:11:44
Hey @sama -- I get that your Code Red is about rapidly beefing up ChatGPT to beat Gemini on key metrics and meeting user needs ... HOWEVER this still won't solve your main problem which is that building big LLMs effectively is no longer differentiating...

I think OpenAI still has potential to move faster than Big Tech institutions but you'll need some new ideas not just revved up activity and focus...

If you're interested to think a bit more deeply and laterally about how to achieve AGI -- and since you've said you'd like OpenAI to be more open -- you may want to think about integrating GPT6 with some of our new ideas in Hyperon ...

https://t.co/GrY14xdtZo

If you shift from backprop to predictive/causal coding you may find you can make a transformer NN that learns continually, ending this mess of artificially distinct inference vs training models, and training in batches.   
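
The continual-learning claim above can be illustrated with a toy sketch (this is not Hyperon's or anyone's actual method, and all names and values here are illustrative): predictive-coding-style schemes drive each weight with a locally computed prediction error, so in the simplest single-layer linear case the rule reduces to the classic delta rule, and learning proceeds online, sample by sample, with no separate training and inference phases.

```python
import numpy as np

# Toy illustration of local, error-driven online learning.
# A single linear unit predicts a target from its input; the *local*
# prediction error drives the weight update (delta rule), one sample
# at a time, so the model keeps adapting continually instead of
# alternating between batch training and frozen inference.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])   # hidden mapping used only to generate demo data
w = np.zeros(2)                  # model weights, updated continually
lr = 0.1

for _ in range(500):             # a stream of samples, no batching
    x = rng.normal(size=2)       # incoming observation
    target = true_w @ x          # supervision signal
    error = target - w @ x       # local prediction error
    w += lr * error * x          # error-driven local update

print(np.round(w, 2))            # prints approximately [ 2. -1.]
```

The point of the sketch is only that the update uses quantities available at the unit itself (its input and its own error), which is the property continual-learning arguments against global backprop tend to lean on.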

And rather than artificially glomming on extra tools via RAG and APIs, you can embed transformers in a theoretically grounded integrated cognitive architecture (Hyperon), along with logic theorem provers, evolutionary program learning, nonlinear-dynamical attention allocation etc.   

There is lots of work ahead to make all these things work effectively at scale and lots of details to resolve -- but if you actually want to get to AGI (and then ASI), it's not just that bigger and bigger LLMs won't do it, no cognitive architecture with LLMs *at the center* will do it.. though absolutely LLMs can be very valuable components... 

Oh and with these tools one can build AGI on decentralized networks too (we can use the core language-of-thought of the Hyperon AGI design, MeTTa, as a smart contract language) -- so you don't necessarily require trillions of dollars of your own monolithic infrastructure...  Of course more hardware helps but it can be distributed all over the place ;)

https://t.co/s4QJgo7366

Yes, all this is not yet matured and product-ready -- but the core infrastructure is scalable as of a couple months ago ... and when you first started OpenAI transformers were not matured and product-ready either right? ...

Just sayin' ...

Ofc I don't totally expect you to take me up on this offer to help you integrate our OSS tools in your stack ... but if you don't, then in a couple years after we've launched actually-open and decentralized AGI, I will point you to this old message and smile ;) ...


Building Beneficial AGI - CEO @asi_alliance @singularitynet, @true_agi , Interim CEO @Singularity_Fi, @SophiaVerse_AI, Chair @opencog @HumanityPlus @iCog_Labs

Ben Goertzel
Tue Dec 09 00:05:50
4️⃣ Affonso: $299 for a one-year membership, used for MkSaaS affiliate marketing

https://t.co/cCLk9tFZgl

5️⃣ SaaSBoilerplates: $107 for three months of homepage ad placement

6️⃣ Trancy: $30, an alternative to Immersive Translate; I find this product has more character

7️⃣ Renewed the fees for 5 domains. Sticking with indie development, rolling up my sleeves and getting back to work!

🔥 The best AI SaaS boilerplate - https://t.co/VyNtTs0jSX 🚀 The best directory boilerplate with AI - https://t.co/wEvJ1Dd8aR 🎉 https://t.co/bh1RxeERuY & https://t.co/zubXJCoY92 & https://t.co/tfQf8T7gGF

Fox@MkSaaS.com
Mon Dec 08 23:59:02
RT @adelwu_: the king @swyx and the immaculate vibes at the 2025 dev writers retreat 🫶🌴

achieve ambition with intentionality, intensity, & integrity - @dxtipshq - @sveltesociety - @aidotengineer - @latentspacepod - @cognition + @smol_ai

swyx #DevWritersRetreat
Mon Dec 08 23:58:35