Thread Easy
  • Explore
  • Compose Thread

Your all-in-one Twitter thread assistant

© 2025 Thread Easy All Rights Reserved.

Explore

Browse threads as cards, newest first

"If Anyone Builds It, Everyone Dies" seems weak. Yud & Soares wasted an opportunity to make a real, strong case for AI x-risk. They instead continue to make obvious, long-debunked mistakes, like naively counting "possible" AI goals https://t.co/90rd1Kjc4i

Disappointing

Research scientist on the scalable alignment team at Google DeepMind. All views are my own.

Alex Turner
Wed Nov 26 01:22:35
(It'd be less frustrating if they didn't also have a cult contorting to insist that Y&S are some of the best reasoners who have ever lived and no one has truly found a hole in their arguments)

Alex Turner
Wed Nov 26 01:22:35