Compute giants line up to win over Anthropic
Giants jockey for long‑term capacity
Compute and cloud providers are now racing to become Anthropic’s primary infrastructure partner. Google and Broadcom have announced expanded cooperation to supply multi‑gigawatt next‑generation TPU capacity, CoreWeave has signed a multi‑year cloud capacity deal, and Amazon Web Services (AWS) continues to push Trainium as a production route — all while Anthropic is reportedly even evaluating its own chip program. This is not a series of one‑off purchases. Whichever provider wins Anthropic could shape which chip architectures and cloud models dominate large‑scale enterprise AI deployments over the next several years.
Why Anthropic matters
Anthropic is no longer a niche research lab. Founded by ex‑OpenAI researchers and long focused on safety and enterprise reliability, the company has disclosed large late‑stage fundraising rounds and rapidly rising revenue run‑rates, and its customer base reportedly now includes hundreds of enterprise contracts that generate sustained compute demand. That mix — subscription and enterprise workflows rather than fleeting consumer spikes — turns what might otherwise be short‑term GPU orders into demand for multi‑year, highly reliable capacity and complex deployment support, making Anthropic a strategically scarce customer for upstream suppliers.
What providers are offering
The competing offers reveal three distinct supply models. Google is moving its TPU architecture outward — reportedly with roughly 3.5 GW earmarked under a longer‑term arrangement, with Broadcom supplying custom silicon and integration for that stack; AWS wants Trainium to prove itself inside a headline model’s production footprint; and CoreWeave represents an NVIDIA‑centric elastic cloud route prioritized for Claude workloads. Anthropic is also reportedly advancing data‑center plans with partners and assessing self‑developed chips, giving it negotiating leverage and forcing vendors to accommodate multi‑architecture, multi‑cloud deployments.
Bigger strategic stakes
This scramble matters beyond commercial margins. As OpenAI pursues a more vertically integrated path — building on‑prem capacity and direct chip partnerships — Anthropic’s multi‑vendor strategy sends a different signal: major models can pull the supply chain toward platform‑agnostic, enterprise‑grade delivery. Geopolitical dynamics and export controls only increase the value of flexible, diversified supply lines. In short, Anthropic has become a bellwether customer: not just for immediate revenue, but for which chip architectures, cloud providers and deployment practices will set the industry standard.
