Thread Easy

The all-in-one partner for Twitter threads


Explore

Newest first — browse tweet threads


RT @divyat09: [1/9] While pretraining data might be hitting a wall, novel methods for modeling it are just getting started!

We introduce f…

Asst professor @MIT EECS & CSAIL (@nlp_mit). Author of https://t.co/VgyLxl0oa1 and https://t.co/ZZaSzaRaZ7 (@DSPyOSS). Prev: CS PhD @StanfordNLP. Research @Databricks.

Omar Khattab
Wed Oct 29 17:00:02
Boeing reported a charge of nearly $5 billion related to delays in its 777X jet program

Top and breaking news, pictures and videos from Reuters. For breaking business news, follow @ReutersBiz. Our daily podcast is here: https://t.co/KO0QFy0d3a

Reuters
Wed Oct 29 17:00:01
Was especially curious to ask @karpathy why self-driving cars took a decade+ from stellar demo rides to even somewhat deployed. Andrej led AI at Tesla for 5 years.

I really wanted to know whether these frictions should lengthen our AGI timelines, or whether they were idiosyncratic to self driving.

Driving has a really high cost of failure. Humans are surprisingly reliable drivers - we have a serious accident every 400,000 miles/7 years. And self-driving cars need to match or beat this safety profile before they can be deployed.

But are most domains like this? Before the interview, it seemed to me that almost every domain we would want to plug AGI into has a much lower cost of failure. If fully autonomous software engineers weren’t allowed to make a mistake for 7 years, deployment would indeed be super slow.

Andrej made an interesting point that I hadn’t heard before: compared to self driving, software engineering has a higher (and potentially unbounded) cost of failure:

> If you’re writing actual production-grade code, any kind of mistake could lead to a security vulnerability. Hundreds of millions of people’s personal Social Security numbers could get leaked.

> In self-driving, if things go wrong, you might get injured. There are worse outcomes. But in software, it’s almost unbounded how terrible something could be.

> In some ways, software engineering is a much harder problem [than self driving]. Self-driving is just one of thousands of things that people do. It’s almost like a single vertical. Whereas when we’re talking about general software engineering, there’s more surface area.

There’s potentially another reason why the LLM -> widely deployed AGI transition might happen much faster: LLMs give us perception, representations, and common sense (to deal with out of distribution examples) for free, whereas these had to be molded from scratch for self-driving cars. I asked Andrej about this:

> I don’t know how much we’re getting for free. LLMs are still pretty fallible and they have a lot of gaps that still need to be filled in. I don’t think that we’re getting magical generalization completely out of the box.

> The other aspect that I wanted to return to is that self-driving cars are nowhere near done still. The deployments are pretty minimal. Even Waymo has very few cars. They’ve built something that lives in the future. They’ve had to pull back the future, but they had to make it uneconomical.

> Also, when you look at these cars and there’s no one driving, there’s more human-in-the-loop than you might expect. In some sense, we haven’t actually removed the person, we’ve moved them to somewhere where you can’t see them.

Host of @dwarkeshpodcast https://t.co/3SXlu7fy6N https://t.co/4DPAxODFYi https://t.co/hQfIWdM1Un

Dwarkesh Patel
Wed Oct 29 16:59:57
As a New York loyalist and someone who grew up gripped by a Didionesque obsession with the City, it pains me to admit that San Francisco, for the first time ever, feels like the center of the world.

something new :) prev @a16z @apple @hololens; math @harvard

Carra Wu
Wed Oct 29 16:58:38
Follow the latest developments after research monkeys escaped following a transport truck crash in Mississippi. Download here:

Stay Ahead. Get Breaking News Here First. Download the App.

Fox News
Wed Oct 29 16:58:35
NO MONKEY BUSINESS: Truckload of research monkeys broke free after a transport truck crashed in Mississippi, scattering crates along the highway.

Deputies say the animals carry hepatitis C, herpes, and COVID — and one remains on the loose.

Follow the latest developments after research monkeys escaped following a transport truck crash in Mississippi. Download here:

Fox News
Wed Oct 29 16:58:34