
Explore

Newest first — browse tweet threads


LoL.

Doing the AMD deal and then the Broadcom deal *before* the Nvidia deal was finalized might have been quite a mistake.

Highlighting via @modestproposal1


Managing Partner & CIO, @atreidesmgmt. Husband, @l3eckyy. No investment advice, views my own. https://t.co/pFe9KmNu9U

Gavin Baker · Wed Nov 19 21:53:25
Completely agree.

Sam Altman’s manifestly ridiculous $1 trillion of spending commitments shifted the AI investing landscape. The market is more skeptical now.

Ironically makes an IPO harder for them.

Also likely ended any potential for a 1999-style melt-up, which is healthy.


And nice that silly press releases should be less relevant now.

Gavin Baker · Sat Nov 15 20:12:13
Yesterday, @RealJimChanos posited that Tesla’s relatively low capex meant that they were not a serious competitor in real world AI and Robotics.

This is *exactly* the wrong way to look at it and the implications of this fact are actually positive for Tesla IMO.

Tesla’s inference definitionally happens in the car, so their customers are effectively paying for the inference compute “capex,” which is now probably the majority of hyperscaler capex spend.

Tesla’s capex might be an order of magnitude higher if they had to synthetically generate relevant driving data in a datacenter. Customer subsidized vertical integration is beautiful.
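
A rough back-of-envelope of what that shift looks like (every number below is a hypothetical placeholder, not a Tesla or hyperscaler figure):

    # Customer-subsidized inference "capex": each car ships with its own
    # inference computer, paid for in the purchase price.
    # All inputs are hypothetical placeholders, not Tesla figures.
    fleet_size = 5_000_000       # hypothetical cars on the road
    hw_cost_per_car = 1_500      # hypothetical in-car inference computer, USD

    customer_funded_capex = fleet_size * hw_cost_per_car
    print(f"Inference compute funded by customers: ${customer_funded_capex / 1e9:.1f}B")
    # A hyperscaler serving an equivalent inference workload would carry
    # this spend on its own balance sheet as datacenter capex.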

This is also why at some point Tesla customers will be able to put their cars into a pool of distributed edge compute and earn money when the car is not driving - the same way that Akamai and Cloudflare are putting single GPUs in their edge nodes.

The Tesla fleet as the world’s largest, most distributed CDN for AI (and only AI, as you obviously can’t cache content in cars) is a real possibility. BYD will have a similar opportunity and a similar inference cost advantage.
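
A toy sizing of such a pool, just to make the idea concrete (fleet size, per-car throughput and utilization below are all hypothetical assumptions):

    # Toy sizing of a fleet-scale distributed compute pool.
    # All inputs are hypothetical and illustrative only.
    fleet_size = 5_000_000      # hypothetical cars enrolled in the pool
    tops_per_car = 100          # hypothetical in-car inference throughput, TOPS
    utilization = 0.10          # fraction of fleet capacity actually sold

    pool_tops = fleet_size * tops_per_car * utilization
    # 1 TOPS = 1e12 ops/s, so dividing by 1e6 expresses the pool in
    # exa-ops/s (1e18 ops/s).
    print(f"Sellable pool: ~{pool_tops / 1e6:.0f} exa-ops/s")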

Beyond this significant inference cost advantage, Tesla has the second largest coherent Hopper cluster - behind only xAI - in the world for pre-training. You only need one coherent cluster *if* it is large enough. Coherent cluster size drives capital efficiency for pre-training.

No one has been able to match the xAI and Tesla clusters from a coherence, speed and cost perspective, with coherence being the most important. This is why Jensen described their datacenter design and execution as “superhuman.” Should note that Tesla also has an AI4 cluster for post-training or mid-training or whatever we are calling it these days.

Tesla also has a significant data advantage for training Chinchilla optimal FSD models as real world video scales infinitely and this data advantage further lowers their capitalized training cost - less synthetic data generation and 3P data sourcing/labeling vs. labs training LLMs.
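
For reference, “Chinchilla optimal” refers to the rule of thumb from Hoffmann et al. (2022): train on roughly 20 tokens per model parameter, with training compute of about 6 * N * D FLOPs. A minimal sketch:

    # Chinchilla compute-optimal rule of thumb (Hoffmann et al., 2022):
    # train on roughly 20 tokens per model parameter; training compute
    # is approximately 6 * params * tokens FLOPs.
    def chinchilla_optimal(params: float) -> tuple[float, float]:
        tokens = 20 * params            # compute-optimal token budget
        flops = 6 * params * tokens     # approximate training FLOPs
        return tokens, flops

    n = 70e9  # the original 70B-parameter Chinchilla model
    tokens, flops = chinchilla_optimal(n)
    print(f"{n / 1e9:.0f}B params -> ~{tokens / 1e12:.1f}T tokens, ~{flops:.1e} FLOPs")

A fleet generating real world video continuously keeps the token side of that budget growing at near-zero marginal cost, which is the data advantage being described.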

This relative capital efficiency as a result of all these advantages - the largest coherent cluster, customers paying for inference, dataset size and ongoing data generation cost - is likely to matter vs. Robotics and FSD competitors who are less capital efficient.

Cost per token is everything for AI. Google is the low cost producer of LLM tokens (with xAI as #2) but Tesla is the lowest cost producer of tokens that matter for FSD and Robotics.

AI is the first time in my career that being the low cost producer has mattered, as token quantity effectively drives quality in a reasoning world. I think this dynamic is very underappreciated by the market.
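
A toy illustration of why cost per token compounds into quality (all inputs hypothetical; no real producer’s economics are implied):

    # Toy cost-per-token comparison. All inputs are hypothetical.
    def tokens_per_dollar(hw_cost_usd: float, lifetime_tokens: float) -> float:
        """Tokens generated per dollar of amortized hardware cost."""
        return lifetime_tokens / hw_cost_usd

    low_cost_producer = tokens_per_dollar(hw_cost_usd=30_000, lifetime_tokens=5e12)
    high_cost_producer = tokens_per_dollar(hw_cost_usd=30_000, lifetime_tokens=2e12)

    # At a fixed budget, the low-cost producer simply thinks with more
    # tokens per query, which in a reasoning world reads as quality.
    print(f"Low-cost producer:  {low_cost_producer:,.0f} tokens/$")
    print(f"High-cost producer: {high_cost_producer:,.0f} tokens/$")
    print(f"Token advantage: {low_cost_producer / high_cost_producer:.1f}x")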

Tesla might very well be outcompeted by an FSD competitor - unlikely from my perspective but anything is possible - but this will not happen because of their relative capex spend.

If LLM inference happened at the edge on phones and PCs as with FSD, hyperscaler capex would be *much* lower. This is the real risk to datacenter spending, not all the value/macro takes. Btw - memory is the biggest winner in this scenario, which is years out if scaling laws continue to hold.

Jim is a smart guy but I humbly think his AI takes are misinformed.

Also so strange to me that anyone is focused on AI as a bubble given the extremely obvious quantum and nuclear bubbles, where there are loads of equities that can decline 99% and still be overvalued.


Gavin Baker · Sat Nov 15 14:52:31
Kioxia signing an LTA (long-term agreement) with Apple right before spot prices went vertical is a little funny.

Apple is so good at managing their DRAM and NAND exposure.


Gavin Baker · Fri Nov 14 13:53:40
Why are value and macro investors so comfortable making highly confident prognostications about AI despite their manifest ignorance?

I don't recall many tech/growth investors penning long, bearish, highly confident and utterly ignorant analyses about energy in 2022/2023.


Gavin Baker · Wed Nov 12 18:47:20
Someone should look into what happened after SoftBank sold their entire 4.9% stake in Nvidia in 2019.


Gavin Baker · Tue Nov 11 20:46:33