Thread Easy

Your one-stop companion for Twitter threads


Explore

Browse tweet threads, newest first


Automakers hunt high and low for chips as supply crisis worsens

Top and breaking news, pictures and videos from Reuters. For breaking business news, follow @ReutersBiz. Our daily podcast is here: https://t.co/KO0QFy0d3a

Reuters
Wed Oct 29 17:01:06
In less than 24 hours, Melissa went from a tropical storm to a Category 4 hurricane, an extremely fast and rare intensification.

Here’s how it became so strong by the time it hit Jamaica:

Democracy Dies in Darkness

The Washington Post
Wed Oct 29 17:00:36
Pictures show damage from Jamaica after Hurricane Melissa

The pulse of the nation in the palm of your hand.

USA TODAY
Wed Oct 29 17:00:23
RT @divyat09: [1/9] While pretraining data might be hitting a wall, novel methods for modeling it are just getting started!

We introduce f…

Asst professor @MIT EECS & CSAIL (@nlp_mit). Author of https://t.co/VgyLxl0oa1 and https://t.co/ZZaSzaRaZ7 (@DSPyOSS). Prev: CS PhD @StanfordNLP. Research @Databricks.

Omar Khattab
Wed Oct 29 17:00:02
Boeing reported a charge of nearly $5 billion related to delays in its 777X jet program

Top and breaking news, pictures and videos from Reuters. For breaking business news, follow @ReutersBiz. Our daily podcast is here: https://t.co/KO0QFy0d3a

Reuters
Wed Oct 29 17:00:01
Was especially curious to ask @karpathy why self-driving cars took a decade+ from stellar demo rides to even somewhat deployed. Andrej led AI at Tesla for 5 years.

I really wanted to know whether these frictions should lengthen our AGI timelines, or whether they were idiosyncratic to self driving.

Driving has a really high cost of failure. Humans are surprisingly reliable drivers - we have a serious accident every 400,000 miles/7 years. And self-driving cars need to match or beat this safety profile before they can be deployed.

But are most domains like this? Before the interview, it seemed to me that almost every domain we would want to plug AGI into has a much lower cost of failure. If fully autonomous software engineers weren’t allowed to make a mistake for 7 years, deployment would indeed be super slow.

Andrej made an interesting point that I hadn’t heard before: compared to self driving, software engineering has a higher (and potentially unbounded) cost of failure:

> If you’re writing actual production-grade code, any kind of mistake could lead to a security vulnerability. Hundreds of millions of people’s personal Social Security numbers could get leaked.

> In self-driving, if things go wrong, you might get injured. There are worse outcomes. But in software, it’s almost unbounded how terrible something could be.

> In some ways, software engineering is a much harder problem [than self driving]. Self-driving is just one of thousands of things that people do. It’s almost like a single vertical. Whereas when we’re talking about general software engineering, there’s more surface area.

There’s potentially another reason why the LLM -> widely deployed AGI transition might happen much faster: LLMs give us perception, representations, and common sense (to deal with out of distribution examples) for free, whereas these had to be molded from scratch for self-driving cars. I asked Andrej about this:

> I don’t know how much we’re getting for free. LLMs are still pretty fallible and they have a lot of gaps that still need to be filled in. I don’t think that we’re getting magical generalization completely out of the box.

> The other aspect that I wanted to return to is that self-driving cars are nowhere near done still. The deployments are pretty minimal. Even Waymo has very few cars. They’ve built something that lives in the future. They’ve had to pull back the future, but they had to make it uneconomical.

> Also, when you look at these cars and there’s no one driving, there’s more human-in-the-loop than you might expect. In some sense, we haven’t actually removed the person, we’ve moved them to somewhere where you can’t see them.


Host of @dwarkeshpodcast https://t.co/3SXlu7fy6N https://t.co/4DPAxODFYi https://t.co/hQfIWdM1Un

Dwarkesh Patel
Wed Oct 29 16:59:57