Thread Easy

© 2025 Thread Easy All Rights Reserved.
I would like to clarify a few things.

First, the obvious one: we do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market.  If one company fails, other companies will do good work.

What we do think might make sense is governments building (and owning) their own AI infrastructure, but then the upside of that should flow to the government as well. We can imagine a world where governments decide to offtake a lot of computing power and get to decide how to use it, and it may make sense to provide lower cost of capital to do so. Building a strategic national reserve of computing power makes a lot of sense. But this should be for the government’s benefit, not the benefit of private companies.

The one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government’s call and where we would be happy to help (though we did not formally apply). The basic idea there has been ensuring that the sourcing of the chip supply chain is as American as possible in order to bring jobs and industrialization back to the US, and to enhance the strategic position of the US with an independent supply chain, for the benefit of all American companies. This is of course different from governments guaranteeing private-benefit datacenter buildouts.

There are at least 3 “questions behind the question” here that are understandably causing concern.

First, “How is OpenAI going to pay for all this infrastructure it is signing up for?” We expect to end this year above $20 billion in annualized revenue run rate and grow to hundreds of billions by 2030. We are looking at commitments of about $1.4 trillion over the next 8 years. Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there; we are quite excited about our upcoming enterprise offering for example, and there are categories like new consumer devices and robotics that we also expect to be very significant. But there are also new categories we have a hard time putting specifics on, like AI that can do scientific discovery, which we will touch on later.

We are also looking at ways to more directly sell compute capacity to other companies (and people); we are pretty sure the world is going to need a lot of “AI cloud”, and we are excited to offer this. We may also raise more equity or debt capital in the future.

But everything we currently see suggests that the world is going to need a great deal more computing power than what we are already planning for.

Second, “Is OpenAI trying to become too big to fail, and should the government pick winners and losers?” Our answer on this is an unequivocal no. If we screw up and can’t fix it, we should fail, and other companies will continue doing good work and serving customers. That’s how capitalism works, and the ecosystem and economy would be fine. We plan to be a wildly successful company, but if we get it wrong, that’s on us.

Our CFO talked about government financing yesterday, and then later clarified her point, underscoring that she could have phrased things more clearly. As mentioned above, we think that the US government should have a national strategy for its own AI infrastructure.

Tyler Cowen asked me a few weeks ago about the federal government becoming the insurer of last resort for AI, in the sense of risks (like nuclear power) not about overbuild. I said “I do think the government ends up as the insurer of last resort, but I think I mean that in a different way than you mean that, and I don’t expect them to actually be writing the policies in the way that maybe they do for nuclear”. Again, this was in a totally different context than datacenter buildout, and not about bailing out a company. What we were talking about is something going catastrophically wrong—say, a rogue actor using an AI to coordinate a large-scale cyberattack that disrupts critical infrastructure—and how intentional misuse of AI could cause harm at a scale that only the government could deal with. I do not think the government should be writing insurance policies for AI companies.

Third, “Why do you need to spend so much now, instead of growing more slowly?” We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest to be really scaling up our technology. Massive infrastructure projects take quite a while to build, so we have to start now.

Based on the trends we are seeing of how people are using AI and how much of it they would like to use, we believe the risk to OpenAI of not having enough computing power is more significant and more likely than the risk of having too much. Even today, we and others have to rate limit our products and not offer new features and models because we face such a severe compute constraint.

In a world where AI can make important scientific breakthroughs but at the cost of tremendous amounts of computing power, we want to be ready to meet that moment. And we no longer think it’s in the distant future. Our mission requires us to do what we can to not wait many more years to apply AI to hard problems, like contributing to curing deadly diseases, and to bring the benefits of AGI to people as soon as possible.

Also, we want a world of abundant and cheap AI. We expect massive demand for this technology, and for it to improve people’s lives in many ways.

It is a great privilege to get to be in the arena, and to have the conviction to take a run at building infrastructure at such scale for something so important. This is the bet we are making, and given our vantage point, we feel good about it. But we of course could be wrong, and the market—not the government—will deal with it if we are.


AI is cool i guess

Sam Altman
Thu Nov 06 19:21:49
My 12yo artist is improving #proudparent

CPO at https://t.co/BNZzlkTfVp. Founder of https://t.co/hOAmca8qLm and https://t.co/dRwgbZCSOw. Coffee-making, parenting, building, exploring: RU → CN → NZ → CL → UK → NZ → PL → UK?

Stas Kulesh
Thu Nov 06 19:21:26
According to ChatGPT, based on my audience size, I should be making at least $10-15k per month. 

Over $25k if I systemize. 🤔


I build stuff. On my way to making $1M 💰 My projects 👇

Florin Pop 👨🏻‍💻
Thu Nov 06 19:20:01
Translated article: Amazon's mass layoffs, AI's fault or the economy's?
By Gergely Orosz

Amazon has announced yet another round of mass layoffs, citing the need for greater agility. But what is really behind these cuts: a faltering US economy, or AI starting to take people's jobs?

Hi, I'm Gergely, and this is a free special edition of The Pragmatic Engineer newsletter. I cover what is happening at Big Tech and startups from the perspective of senior engineers and engineering managers. Today's article is an excerpt from last week's full edition of The Pulse. Subscribe to the full version here.

Last week, online retail giant Amazon abruptly announced it would lay off 14,000 people. The company has made repeated large-scale cuts in recent years:
- January 2023: 18,000 laid off.
- March 2023: another 9,000.
- November 2023: hundreds cut from the Alexa team as part of the pivot to generative AI (GenAI).
- April 2024: hundreds cut from the AWS business.

Software engineers were hit hardest this round. According to GeekWire, 2,300 people were cut in Washington State alone, 25% of them software engineers.

The internal memo from Beth Galetti, Amazon's SVP of People Experience and Technology, did not clearly explain why:
"Some may wonder why we are cutting roles when the business is performing so well. We deliver great customer experiences every day, innovate quickly, and post strong results. But the world is changing fast: this generation of AI is the most transformative technology since the internet, and companies must streamline their organizations, flatter and more agile, to keep pace."

The message is confusing: business is great, but we are cutting jobs anyway? Layoffs usually mean a business is in trouble, yet that is clearly not the case at Amazon. So what is the real reason?

Layoffs for efficiency?

The official explanation:
> "We are convinced we need a leaner organization, with fewer layers and more ownership, to serve customers and move the business faster."

Sound familiar? The big layoffs of 2023 ran on essentially the same excuse. Tech companies hired frantically during the pandemic, organizations grew bloated, and decision-making slowed. Starting in 2023, Meta led the way in thinning its management layers, and the other giants followed. But Amazon has already been through several rounds of cuts; did it only notice this problem today? Clearly, this explanation falls short.

Layoffs to buy more GPUs?

The day after announcing the layoffs, Amazon also unveiled a major AI project, Project Rainer: the largest AI datacenter in AWS history, equipped with 500,000 of Amazon's in-house Trainium2 chips. Anthropic will use these chips to train the next generation of its Claude large language models (LLMs).

Building a datacenter like this burns enormous amounts of money; Project Rainer alone cost $11 billion. So are the layoffs meant to redirect the savings into AI datacenters?

Let's do some quick math:
- Amazon's cash reserves stand at $93 billion.
- Its free cash flow (profit after infrastructure investment) is $32 billion.
- The layoffs save roughly $2-4 billion.

That means the savings do not even cover half of one Project Rainer. In fact, Amazon has enough money to build three datacenters of this scale without laying off a single person to fund them.
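The back-of-envelope comparison above can be sketched in a few lines. This is only an illustration of the article's arithmetic; all figures (in billions of USD) are the ones quoted above, not independent data:

```python
# Figures from the article, in billions of USD.
cash_reserves = 93        # Amazon's cash on hand
free_cash_flow = 32       # annual free cash flow
layoff_savings = (2, 4)   # estimated annual savings range from 14,000 cuts
project_cost = 11         # reported cost of one Project Rainer datacenter

# Even the high end of the savings is less than half of one datacenter.
assert max(layoff_savings) < project_cost / 2

# Cash reserves alone could fund three datacenters of this size.
assert cash_reserves >= 3 * project_cost

print(f"savings cover at most {max(layoff_savings) / project_cost:.0%} of one project")
```

Running it prints `savings cover at most 36% of one project`, which is the article's point: the savings are small relative to both the datacenter budget and Amazon's cash position.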

Neither "leaner structure" nor "AI replacing people" holds up

I am not the only skeptic; former Amazon executive Arne Knudson does not buy the official reasoning either:
> "I spent 18 years at Amazon and went through several rounds of layoffs, but I have never seen cuts this large for years on end. This is about more than post-pandemic trimming; those people were already let go back in 2022.
>
> Nor do I believe AI has replaced 30,000 people. AI is my specialty, and I ran AI projects at Amazon; today's AI automation and accuracy are nowhere near that level.
>
> From the HR side, things keep getting worse. A lot of the work has been outsourced, HR staff are overstretched, and turnover is high. If Amazon really did cut a large share of its HR staff, future hiring will stall too. That suggests Amazon will not expand hiring for at least a year, and that these layoffs are about more than the stated reasons."

Is the US economy the real culprit?

AWS is still shining, but what about Amazon's retail business?

As one of the world's largest e-commerce platforms, Amazon is acutely sensitive to the US economy. When the economy weakens, consumers cut discretionary spending first, and Amazon is among the first companies to feel the impact.

Recently, the CEO of restaurant chain Chipotle described a similar trend:
> "We're seeing consumers across all income groups dine out noticeably less often this year, especially households earning under $100,000 a year, who account for 40% of our revenue. Younger consumers (25 to 35) are hit especially hard by inflation, and their spending power has dropped significantly."

Logistics giant UPS corroborates the trend: it recently cut 48,000 jobs, precisely because parcel volumes and revenue declined.

The picture is clear: overall US consumer spending really is slipping, and e-commerce inevitably feels it first. That is the most plausible explanation for Amazon's latest mass layoffs: the company is bracing for a downturn before it arrives.

Fellow giants Google, Meta, and Microsoft have made no comparable cuts because their core businesses do not face consumers directly. Amazon is different: it sits on the front line of retail and sees the signs of an economic downturn earlier.

In my view, Amazon's decisions have always been highly rational. The most credible reason behind this string of layoffs is the economy: Amazon believes American consumers are about to tighten their belts.


Prompt Engineer, dedicated to learning and disseminating knowledge about AI, software engineering, and engineering management.

宝玉
Thu Nov 06 19:16:17
What a delight to host Diane Greene at our office! Diane is an inspirational leader, formerly founder & CEO of VMware, CEO of Google Cloud, and one of the first people I spoke to when I was starting @samaya_AI. Still remember her advice:

(i) Companies are built by getting deep into the details
(ii) Find problems that add value to the world
(iii) Start by having a vision of what the full scope looks like and work backwards from there to define the first product you can sell.

So wonderful to have you visit!


Cofounder and CEO @Samaya_AI. Formerly Research Scientist Google Brain (@GoogleAI), PhD in ML @Cornell.

Maithra Raghu
Thu Nov 06 19:13:27
The latest article from The Pragmatic Engineer argues that Amazon's layoffs are not about saving money for GPUs or improving efficiency: Amazon's e-commerce data shows the US economy is in trouble.

Why aren't the layoffs about buying GPUs?

How much do the layoffs save? A rough estimate: total compensation (salary + benefits + stock) for these 14,000 people comes to about $2-4 billion a year.

How much money does Amazon have? Per its latest earnings report, Amazon holds $93 billion in cash reserves, and its free cash flow (roughly, the money left over after all expenses) is $32 billion.

Cutting 14,000 employees to save a "mere" $2-4 billion makes no financial sense; Amazon is not short of that kind of money. So "saving money for AI" is probably not the reason.

Amazon has two main businesses:
1. AWS (cloud services): doing great. Just look at that $11 billion AI project; it spends without blinking, and its customers are large enterprises.
2. E-commerce (online retail): this is the thermometer for ordinary consumers in the US and worldwide.

Amazon is like the canary in the coal mine: it senses economic shifts sooner than anyone. If consumers (especially American consumers) start tightening their wallets, Amazon is the first to know.

The article cites two warning signs from other industries:
1. Restaurants: Chipotle's CEO said publicly that low- and middle-income diners, especially 25-35 year-olds, are eating out noticeably less often. Why? Inflation, student loan repayments, slow wage growth: people have less money, so "eating out" is the first thing to go.

2. Logistics: UPS just announced a startling number: 48,000 layoffs this year. Why? Revenue fell. Fewer packages are being shipped, or the packages being shipped are worth less.

Connect the dots: people are eating out less (Chipotle) and shipping fewer packages (UPS), which strongly suggests they are also buying less online.

Google and Meta may not feel it yet, because they live mainly on ad revenue. Microsoft does not feel it either; it relies on enterprise services.

But Amazon, the retail giant built on selling and shipping goods, has clearly seen the consumer slowdown.

The truth behind these layoffs is likely not a strategic "embrace AI" transformation but a very traditional defensive move: cut costs before the economic winter actually arrives, and prepare to ride it out.

Amazon wraps it in the shiny word "AI" only because it does not want to burst the bubble or spook the market.

--

Article link:



宝玉
Thu Nov 06 19:12:52