
Explore

Newest first — browse tweet threads


in addition to @DavidGeorge83's always incredible investing insights, loved his pro tip on being deliberate about just carving out "thinking time". you could do it solo, but it could even be productive with a squad - my co-founders and i used to do wednesday morning founder breakfast jam sessions at a quiet bagel shop in the 'burbs, and we probably generated more alpha from those free-wheeling discussions than any of our in-office team meetings.


GP @a16z investing in healthcare; co-founder @kyruushealth; alum of endeca, HST, MIT, SFS.

Julie Yoo
Fri Dec 05 17:52:39
I'd pay AI labs to let me include my datasets on their training.

Even models like Opus 4.5 still struggle with learning new concepts by prompting alone. No matter how much you try, you can't beat 15 T tokens of pre-training. It is just not natural to them.

For me, this is extra annoying, because being fluent on HVM demands comfort with unusual concepts, like linearity. To be fair, I consider it a failure of humanity that this is considered "niche".

Most elegant algorithms and formulas are linear, and keeping that in mind makes you a better programmer and researcher, since you explore a much better space of ideas. So, it isn't like I'm the only one who'd benefit from models understanding it...

In any case, please:

Let me help train your models to suck a bit less.

That's all I want, and nothing else...
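For readers unfamiliar with the term, a linear value is one that must be consumed exactly once: it can't be silently dropped or duplicated. A minimal sketch of what that looks like in practice, assuming GHC's LinearTypes extension (illustrative only, not code from the thread):

    {-# LANGUAGE LinearTypes #-}

    -- The linear arrow (%1 ->) promises the argument is consumed exactly once.
    -- swapPair is linear: x and y are each used exactly once on the right.
    swapPair :: (a, b) %1 -> (b, a)
    swapPair (x, y) = (y, x)

    -- This would be rejected by the typechecker, because x is used twice:
    -- duplicate :: a %1 -> (a, a)
    -- duplicate x = (x, x)

Rust's ownership model enforces a closely related (affine) discipline, which is part of why the concept is less exotic than it might sound.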


Kind / Bend / HVM / INets / λCalculus

Taelin
Fri Dec 05 17:52:18
RT @eviljer: Nano Banana Pro Template: 

One prompt to visualize a story's inverted universe — iconic scene above, its opposite realm flipp…


Prompt Engineer, dedicated to learning and disseminating knowledge about AI, software engineering, and engineering management.

宝玉
Fri Dec 05 17:51:57
Wait, that's... new.

I'm using Claude Code with the Sonnet 4.5 model, but for *researching* the files, it uses... Haiku 4.5?

Probably saving tokens by offloading the simpler operations?

Not sure if that's very smart or will worsen the results.
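The behaviour being speculated about here is plain model routing: send cheap, read-only research steps to a smaller model and keep the stronger model for actual edits. A hypothetical sketch of that idea (names and logic are illustrative, not Claude Code's actual implementation):

    -- Hypothetical cost-saving router, not Claude Code's real logic.
    data Model = Haiku45 | Sonnet45 deriving Show

    data Step = ResearchFiles | PlanChanges | EditFiles deriving Show

    -- Read-only exploration goes to the cheaper model; anything that
    -- writes code stays on the stronger one.
    routeModel :: Step -> Model
    routeModel ResearchFiles = Haiku45
    routeModel _             = Sonnet45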


~20 yrs in web-dev, now mostly Laravel. My Laravel courses: https://t.co/HRUAJdMRZL My Youtube channel: https://t.co/qPQAkaov2F

Povilas Korop | Laravel Courses Creator & Youtuber
Fri Dec 05 17:50:30
First RL/DPO project on Baguettotron (and, very on-brand, RL poetry). Also indirectly answers the question of whether you can get diverse generation from full-synth training.


Artisanal baker of reasoning models @pleiasfr

Alexander Doria
Fri Dec 05 17:50:19
I suspect the market is overestimating data-center power demand and underestimating the scale of broader electrification.

In a cautiously-bullish scenario, we may see >10 GW of new U.S. data-center load added annually. But these customers are effectively price-insensitive, can locate wherever power is available, and will gravitate toward regions with the fastest development timelines.

The broader electrification market looks very different. It is highly price-sensitive, has far more variable load factors, and — critically — is anchored to where people live and drive, not where the grid has capacity. These loads appear in the most constrained parts of the distribution system, not on transmission-adjacent greenfield sites.

If every home added a heat pump or an EV charger, we would need to rebuild vast stretches of the distribution grid — a challenge orders of magnitude larger than siting a handful of 100-MW data centers. The scale and complexity are fundamentally different.

Imagine if Tesla Semis start scaling. A large truck stop in a major corridor would probably be approaching 20-30 MW of supercharger capacity — that's a lot of peak power!

Rebuilding the American electric grid for the 21st century is only just beginning.
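Rough numbers behind that truck-stop figure, assuming roughly 1 MW per Megacharger-class stall (the per-stall power is an assumption, not something stated in the tweet):

    -- Back-of-the-envelope peak load for a Tesla Semi charging stop.
    -- Assumes ~1 MW per Megacharger-class stall; stall counts are illustrative.
    main :: IO ()
    main = mapM_ report [20, 25, 30]
      where
        stallPowerMW = 1.0 :: Double
        report n = putStrLn (show n ++ " stalls x " ++ show stallPowerMW
                             ++ " MW = " ++ show (n * stallPowerMW) ++ " MW peak")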


all i see are gigawatts of power

Ryan McEntush
Fri Dec 05 17:49:19