Thread Easy

The all-in-one partner for Twitter threads

© 2025 Thread Easy All Rights Reserved.

Explore — tweet threads, newest first

i see that 5.1 is not released for pro but i am not clear if it was being tested with pro last week.

RL and distributed training • eXperiments lab
tokenbender
Thu Nov 13 07:37:30
now i need to spend some more time and attune myself to the writing ability or lack of for this new model. 

also calibrate how I feel about its intelligence increase or decrease.
tokenbender
Thu Nov 13 04:43:29
if tile-based primitives generalize across nvidia, apple and amd, then a unified, high perf programming model for multiple accelerators is plausible. that weakens the “cuda moat” and opens the ai compute landscape.

https://t.co/cT1W7yUwDx

tokenbender
Wed Nov 12 18:03:41
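As background for the tweet above: "tile-based primitives" structure a kernel as operations on small sub-blocks (tiles) of the operands rather than individual elements, which is the abstraction several vendor programming models share. A minimal illustrative sketch of a tiled matrix multiply, using plain NumPy as a stand-in for a real accelerator backend (the function name and tile size are this sketch's own, not from any of the vendors' APIs):

```python
import numpy as np

def tiled_matmul(A, B, tile=4):
    """Compute A @ B by looping over square tiles, the way tile-based
    programming models organize the work: each (i, j) output tile
    accumulates partial products over tiles of the shared K dimension."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=np.result_type(A, B))
    for i in range(0, M, tile):
        for j in range(0, N, tile):
            for k in range(0, K, tile):
                # one tile-level multiply-accumulate; slices past the end
                # are clipped by NumPy, so ragged edges are handled
                C[i:i+tile, j:j+tile] += A[i:i+tile, k:k+tile] @ B[k:k+tile, j:j+tile]
    return C
```

The point of the abstraction is that only the tile-level multiply-accumulate needs a vendor-specific implementation; the loop structure above is the portable part.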
you do not need to see all the things that came out today.
you just need to see THE ONE thing that came out today


tokenbender
Wed Nov 12 18:03:40
RT @SzymonOzog_: I made multi-node inference 25% faster with a low latency allreduce. 🧵
My first post stated: "A goal would be to be able t…

tokenbender
Wed Nov 12 15:23:54
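For context on the retweet above: the all-reduce it refers to is the collective that sums a vector across ranks and leaves every rank with the result; the classic bandwidth-optimal layout is the ring (reduce-scatter followed by all-gather). A toy single-process simulation of that pattern, purely illustrative and unrelated to the linked author's actual low-latency implementation:

```python
import numpy as np

def ring_allreduce(vectors):
    """Simulate a ring all-reduce (sum) over a list of equal-length
    arrays, one per 'rank'. Returns the per-rank results; every rank
    ends with the elementwise sum. Each rank moves ~2*(n-1)/n of the
    vector in total, which is why the ring is bandwidth-optimal."""
    n = len(vectors)
    # each rank's vector, split into n chunks that circulate the ring
    chunks = [np.array_split(v.astype(float).copy(), n) for v in vectors]

    # phase 1: reduce-scatter — after n-1 steps, rank r holds the
    # fully summed chunk with index (r + 1) % n
    for step in range(n - 1):
        for r in range(n):
            src = (r - step) % n  # chunk rank r forwards this step
            chunks[(r + 1) % n][src] = chunks[(r + 1) % n][src] + chunks[r][src]

    # phase 2: all-gather — the completed chunks travel once more
    # around the ring, overwriting the stale partial sums
    for step in range(n - 1):
        for r in range(n):
            src = (r + 1 - step) % n  # completed chunk rank r forwards
            chunks[(r + 1) % n][src] = chunks[r][src].copy()

    return [np.concatenate(c) for c in chunks]
```

Latency-focused variants trade this chunked ring for fewer, larger messages when vectors are small, since the ring's 2*(n-1) sequential steps dominate at small sizes.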
RT @natolambert: I’m starting a new series of interviews on @interconnectsai with all the leading open model labs around the world to show…

tokenbender
Wed Nov 12 15:22:41