Broadcom to design Google TPUs and help Anthropic scale to 3.5 GW of AI compute, SEC filing shows
Deal details and scope
According to a Broadcom filing with the U.S. Securities and Exchange Commission, the chipmaker has agreed to design and supply customized tensor processing units (TPUs) for Google’s next‑generation AI accelerators and to provide networking equipment and components for Google’s new AI data‑rack systems under a long‑term agreement extending through 2031. It has been reported that an expanded pact between Google and Anthropic will give the startup access to roughly 3.5 gigawatts (GW) of TPU‑based compute starting in 2027 as part of a multi‑GW deployment plan; the deal is said to be contingent on Anthropic’s commercial growth trajectory.
Broadcom will reportedly be a key infrastructure partner for Google Cloud’s AI push, designing the TPU silicon and supplying the rack and networking elements that link compute at scale. The filing also notes parallel work: Broadcom is collaborating with OpenAI on customized AI chips, while OpenAI continues to rely heavily on NVIDIA GPUs delivered through major cloud providers and has separately committed to securing 6 GW of AMD GPU capacity, with the first 1 GW expected to come online later this year.
Market reaction and financial projections
Analysts are already pricing in a sharp rise in demand for bespoke AI silicon and data‑center infrastructure. It has been reported that a Mizuho Securities team led by Vijay Rakesh modeled very large revenue outcomes for Broadcom tied to Anthropic’s expansion — estimates of roughly $21 billion in AI‑related revenues in 2026 and $42 billion in 2027 were cited in secondary reports — though those analyst projections were not disclosed in the SEC filing itself and should be treated as estimates rather than confirmed contract values.
Anthropic has been bullish about demand for its Claude models; it reportedly told investors that annualized revenue has climbed to over $30 billion and that the number of enterprise customers spending more than $1 million annually has more than doubled in a matter of weeks. It has also been reported that most of the newly procured compute will be deployed in the United States, deepening Anthropic’s operational ties with Google Cloud.
Why this matters: supply chains, competition and geopolitics
Why does a Broadcom–Google–Anthropic arrangement matter beyond Silicon Valley? Large tech customers are moving to diversify away from general‑purpose GPUs toward customized accelerators and integrated rack solutions to lower cost and increase performance for generative AI workloads. That trend reshapes the economics and winners in the AI compute stack — and it intersects with geopolitics. U.S. export controls, supply‑chain restrictions and rising scrutiny of advanced semiconductors mean that where chips are designed, manufactured and deployed is as important as how powerful they are. For Western readers watching China’s own AI ambitions, the deal signals a reconfiguration of global AI infrastructure toward bespoke, cloud‑anchored ecosystems led by a small set of U.S. vendors and hyperscalers.
Taken together, the filing highlights how commercial demand from hyperscalers and AI startups is driving rapid, multibillion‑dollar investments in bespoke silicon and data‑center networking. The open question is whether incumbents such as NVIDIA can retain their dominance as customers increasingly seek vertically integrated stacks spanning chip designers, cloud providers, and infrastructure suppliers.
