钛媒体 (TMTPost) · 2026-03-25

Arm debuts AGI CPU, vows to sell finished data‑center silicon directly to Meta — a strategic leap into the trillion‑dollar compute market

Arm’s bold pivot: from IP landlord to silicon vendor

Arm announced its first self‑designed data‑center chip, the Arm AGI CPU, at its San Francisco developer conference. The move marks a departure from 35 years of licensing CPU designs and subsystem IP to chipmakers; Arm will now sell finished server silicon directly into hyperscale clouds, starting with Meta. The company frames the product as a foundation for "agentic" AI services and is explicitly targeting a projected data‑center total addressable market that analysts expect could exceed $1 trillion by 2030.

Technical logic: CPUs as the orchestration layer for agentic AI

Arm argues that as AI shifts from one‑off training runs to always‑on, interactive agents, CPUs regain central importance as coordinators of memory, scheduling and fan‑out to energy‑hungry accelerators (GPUs). According to Arm’s reference designs, a 36 kW air‑cooled rack could host ~8,160 cores, and a 200 kW liquid‑cooled Supermicro rack could exceed 45,000 cores, delivering more than double the per‑rack throughput of traditional x86 servers. Those density and power‑efficiency claims are central to Arm’s pitch: pair GPUs as “token factories” with high‑density AGI CPUs as the “energy‑efficient control room.”
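A quick back‑of‑envelope check puts the two quoted rack configurations on a common footing. The core and power figures below are the ones reported from Arm’s reference designs; the cores‑per‑kW metric is our own illustrative comparison, not a figure Arm has published.

```python
# Rack figures as quoted in the article (Arm reference designs).
racks = {
    "36 kW air-cooled": {"power_kw": 36, "cores": 8_160},
    "200 kW liquid-cooled (Supermicro)": {"power_kw": 200, "cores": 45_000},
}

# Derive an illustrative density metric: CPU cores per kilowatt of rack power.
for name, spec in racks.items():
    density = spec["cores"] / spec["power_kw"]
    print(f"{name}: {spec['cores']:,} cores at {density:.0f} cores/kW")
```

Notably, both configurations work out to roughly 225 cores per kW, suggesting the liquid‑cooled design scales the same core density to a much larger power envelope rather than improving efficiency per watt.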

Ecosystem ripple effects and geopolitical context

The product launch cuts across several commercial and geopolitical fault lines. Early customer engagement reportedly includes Meta and other cloud and AI players, and Arm says it will open‑source server reference designs and firmware through the Open Compute Project to accelerate ecosystem adoption. But the move undermines Arm’s long‑held neutrality as an IP licensor and puts it in direct commercial competition with x86 vendors (Intel, AMD) and with CPU efforts inside the accelerator stack (e.g., NVIDIA’s Grace). Downstream chip partners, including those that traditionally license Arm cores, are reportedly weighing alternatives. Against a backdrop of export controls, supply‑chain scrutiny and China’s push for semiconductor self‑reliance, the shift may also accelerate interest in fully open ISAs such as RISC‑V.

What’s at stake

Strategically, Arm’s play is a “lofted pass”: it seeks to capture more of the value now locked in server hardware margins and the orchestration layer of agentic AI. Will hyperscalers embrace a new vertically integrated silicon supplier, and can Arm retain ecosystem trust while competing with its former customers? The answer will shape who controls the next wave of cloud compute — and who profits as AI moves from training peaks to trillion‑scale, always‑on inference.
