We're in a race. It's not USA vs China but humans and AGIs vs ape power centralization. @deepseek_ai stan #1, 2023–Deep Time «C’est la guerre.» ("That's war.") ®1

!["Out model blackmailed our researcher over their affair"
"Our model tried to get out of the docker container"
😂 - it looks like only models in America do all these things, whereas models from China work like diligent knowledge factory workers!!!!
Should we ban AI research in the western hemisphere to protect humanity?
At this point, I have come to sincerely believe AI Security Research is an elaborate scam.
I am not talking about interpretability research, which is awesome & important to understand internal behavior and to reduce hallucinations.
AI security research, on the other hand, is an art to write stories that will sound convincing to boomer politicians & decision makers & common people based on their perception of how AI systems work.
Common people believe, for example
- models have infinite context, like humans
- their interactions with different users result in single unified context
- in fact, they believe there is a single instance of the model (like skynet) that is answering all the users at once. [on this one, please include Paul Graham as well]
Only if those things were true, the below news would make sense. But it is not.
But, beautiful fiction writing. I will give you that. "Out model blackmailed our researcher over their affair"
"Our model tried to get out of the docker container"
😂 - it looks like only models in America do all these things, whereas models from China work like diligent knowledge factory workers!!!!
Should we ban AI research in the western hemisphere to protect humanity?
At this point, I have come to sincerely believe AI Security Research is an elaborate scam.
I am not talking about interpretability research, which is awesome & important to understand internal behavior and to reduce hallucinations.
AI security research, on the other hand, is an art to write stories that will sound convincing to boomer politicians & decision makers & common people based on their perception of how AI systems work.
Common people believe, for example
- models have infinite context, like humans
- their interactions with different users result in single unified context
- in fact, they believe there is a single instance of the model (like skynet) that is answering all the users at once. [on this one, please include Paul Graham as well]
Only if those things were true, the below news would make sense. But it is not.
But, beautiful fiction writing. I will give you that.](/_next/image?url=https%3A%2F%2Fpbs.twimg.com%2Fprofile_images%2F1882081691973586944%2F6Eqmsf_y_400x400.jpg&w=3840&q=75)
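
For context on those three beliefs: in practice each model call is a stateless request that carries only one conversation's recent messages, truncated to a finite context window, and different users' sessions share nothing. Below is a minimal Python sketch of that request flow; the names (`ChatSession`, `count_tokens`, `call_model`, `CONTEXT_WINDOW_TOKENS`) are hypothetical stand-ins for a real tokenizer and inference endpoint, not any particular provider's API.

```python
# Minimal sketch (hypothetical names, no real provider API assumed) of how a chat
# frontend typically talks to a model: each request is stateless and carries only
# one conversation's recent messages, truncated to a fixed context window.

from dataclasses import dataclass, field

CONTEXT_WINDOW_TOKENS = 8_000  # finite per-request budget, not "infinite context"


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~1 token per whitespace-separated word.
    return len(text.split())


def call_model(prompt: list[dict]) -> str:
    # Placeholder for the actual inference call. The serving layer handles many
    # independent requests like this; there is no single long-lived "instance"
    # accumulating every user's conversation.
    return f"(reply conditioned on {len(prompt)} message(s) only)"


@dataclass
class ChatSession:
    """One user's conversation. Sessions share no state with each other."""
    messages: list[dict] = field(default_factory=list)

    def ask(self, user_text: str) -> str:
        self.messages.append({"role": "user", "content": user_text})
        prompt = self._truncate_to_window(self.messages)
        reply = call_model(prompt)  # a fresh, independent request every time
        self.messages.append({"role": "assistant", "content": reply})
        return reply

    def _truncate_to_window(self, messages: list[dict]) -> list[dict]:
        # Keep only the most recent messages that fit the context window;
        # anything older is simply not seen by the model on this request.
        kept, used = [], 0
        for msg in reversed(messages):
            cost = count_tokens(msg["content"])
            if used + cost > CONTEXT_WINDOW_TOKENS:
                break
            kept.append(msg)
            used += cost
        return list(reversed(kept))


if __name__ == "__main__":
    alice, bob = ChatSession(), ChatSession()
    alice.ask("My secret project is called Bluebird.")
    # Bob's session has no access to Alice's messages:
    print(bob.ask("What is Alice's secret project called?"))
```

Running this, Bob's request cannot see Alice's "Bluebird" message, because nothing ever merges separate conversations into one unified context.
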
AI @amazon. All views personal!


Believing is seeing
