Thread Easy
  • Explore
  • Create thread

Your complete partner for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore

Newest first — browse tweet threads

Turn on to blur preview images; turn off to show them clearly

there's now ~50 startups doing fully automated AI influencers + ads on every platform like this


normally I'd post thoughts on this, but honestly if anyone thinks This Is Good Actually at this point, please just block me so we don't have to interact

near
Wed Nov 05 22:42:27
100 years ago Europe ran the world. Today it can’t even run itself.


Professor of computer science at UW and author of '2040' and 'The Master Algorithm'. Into machine learning, AI, and anything that makes me curious.

Pedro Domingos
Wed Nov 05 22:39:30
Last year I interviewed and talked with four "big shots" at ByteDance.
This year, two of them have already left.
Even a company as big as ByteDance has no room for big dreams.


Talking silicon-based AI, watching organic Orange.

Orange AI
Wed Nov 05 22:31:24
Just go for it! Aoligei!

The French government (I even checked whether the site really belongs to the French government; it does) has also launched a large-model leaderboard, and Mistral-Medium-3.1 went straight to the top. (This model doesn't even place in knowledge or multilingual benchmarks, so it's not a case of this model being good at French while the others are bad.)

The leaderboard's scoring is based on user votes, which are fed into a Bradley-Terry statistical model to compute a "satisfaction score".

The leaderboard explicitly states that it "reflects the subjective preferences of the platform's users, not an evaluation of the factuality or accuracy of model answers, and is neither an official recommendation nor a technical performance benchmark".

All I can say is: the French supporting French products does check out.

Leaderboard URL:

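The Bradley-Terry scoring the tweet describes can be sketched in a few lines: pairwise win counts are fit to per-model strengths via the classic minorization-maximization update. This is a minimal illustration, not the leaderboard's actual code; the model names and vote tallies below are made up.

```python
def bradley_terry(wins, n_iters=200):
    """Fit Bradley-Terry strengths from pairwise win counts.

    wins[(a, b)] = number of head-to-head votes where model a beat model b.
    Uses the MM (minorization-maximization) update: each strength p_i is
    set to (total wins of i) / sum_j n_ij / (p_i + p_j).
    """
    models = sorted({m for pair in wins for m in pair})
    p = {m: 1.0 for m in models}  # initial strengths
    for _ in range(n_iters):
        new_p = {}
        for i in models:
            w_i = sum(wins.get((i, j), 0) for j in models)  # total wins of i
            denom = 0.0
            for j in models:
                if j == i:
                    continue
                n_ij = wins.get((i, j), 0) + wins.get((j, i), 0)
                if n_ij:
                    denom += n_ij / (p[i] + p[j])
            new_p[i] = w_i / denom if denom else p[i]
        # Normalize so strengths sum to the number of models.
        s = sum(new_p.values())
        p = {m: v * len(models) / s for m, v in new_p.items()}
    return p

# Hypothetical vote tallies between three models:
votes = {("A", "B"): 60, ("B", "A"): 40,
         ("A", "C"): 70, ("C", "A"): 30,
         ("B", "C"): 55, ("C", "B"): 45}
scores = bradley_terry(votes)
```

With these made-up tallies, model A wins most of its matchups and comes out with the highest strength; the score ranks only relative preference, exactly why the leaderboard's disclaimer matters.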

A coder, road bike rider, server fortune teller, electronic waste collector, co-founder of KCORES, ex-director at IllaSoft, KingsoftOffice, Juejin.

karminski-牙医
Wed Nov 05 22:28:10
a 1 quadrillion parameter language model is not entirely out of the question (besides where to get all that data from)

although you'd probably need 25% more GPUs than this for context and KV cache

100,000 H100s could probably do it

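The tweet's claim can be checked with back-of-the-envelope memory arithmetic. The numbers below are assumptions for illustration: 1e15 parameters stored in fp8 (1 byte each), 80 GB of HBM per H100, and the tweet's ~25% overhead for context and KV cache; training compute is a separate question not covered here.

```python
# Assumptions (not from the tweet except the 25% overhead):
PARAMS = 1e15          # 1 quadrillion parameters
BYTES_PER_PARAM = 1    # fp8 weights, 1 byte per parameter
H100_MEM = 80e9        # 80 GB of HBM per H100

weight_bytes = PARAMS * BYTES_PER_PARAM        # 1 PB of raw weights
gpus_for_weights = weight_bytes / H100_MEM     # GPUs just to hold weights
gpus_with_overhead = gpus_for_weights * 1.25   # +25% for context / KV cache

print(f"GPUs to hold fp8 weights: {gpus_for_weights:,.0f}")
print(f"With ~25% KV-cache overhead: {gpus_with_overhead:,.0f}")
```

Under these assumptions the weights alone need 12,500 H100s and about 15,625 with the 25% overhead, so 100,000 H100s is comfortably enough on the memory axis alone.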

the distribution is anything but normal

snwy
Wed Nov 05 22:24:48
RT @mattyp: The @XDevelopers team ships fast with Replit


ceo @replit. civilizationist

Amjad Masad
Wed Nov 05 22:21:25