Phoenix Technology (凤凰科技) · 2026-03-16

Jensen Huang (黄仁勋) says "Lobster" is the new operating system — Nvidia (英伟达) merges seven chip types into an AI compute powerhouse and reportedly eyes $1 trillion by 2027

Bold claim and new OS rhetoric

Nvidia (英伟达) CEO Jensen Huang (黄仁勋) used a marathon GTC keynote to reposition the company as an end-to-end AI stack vendor: hardware, software and even an operating-system metaphor. Huang reportedly framed "Lobster" as the new operating system for AI, a rhetorical move designed to signal that Nvidia's software and orchestration layers will sit at the centre of future AI deployments. Why an OS metaphor? Because Nvidia wants its stack to be the layer that binds chips, racks and agents into a single platform.

Seven chip types, five rack systems — an AI supercomputer fabric

Huang unveiled Vera Rubin as a full AI supercomputing platform composed of seven chip types and five rack systems, not a single device. Key elements include Rubin GPUs paired with Vera CPUs, a new LPU inference family (Groq 3 LPX) offering on-chip SRAM and massive bandwidth, and Spectrum-6 SPX switches using co-packaged optics (CPO). Nvidia also showed Kyber and Rubin Ultra rack designs, next-generation stacked GPUs based on a Feynman architecture, and Space-1 Vera Rubin modules aimed at putting data-centre-class AI into satellites and orbital data centres. The LPU chips will reportedly be manufactured by Samsung, and the racks are expected to ship later this year.

Software, agents and graphics — NemoClaw, open models and DLSS 5

On the software side, Nvidia introduced NemoClaw, described as the infrastructure layer for OpenClaw agents: a one-command way to deploy AI agents with integrated models (Nemotron), a runtime (OpenShell) and sandboxing for privacy and safety. The company is also expanding an "open model" family focused on agentic, physical and medical AI, and claimed a major graphics advance with DLSS 5, a generative approach Huang likened to a "GPT moment" for graphics. Taken together, Nvidia pitched a compute continuum from orbit and edge (Jetson/IGX/Space-1) to ground DGX and cloud.

Markets, targets and geopolitics

Huang reportedly extended a prior projection and now targets $1 trillion in cumulative revenue from data-centre devices by 2027, a forecast that briefly lifted Nvidia's stock intraday. Investors cheered the product breadth, but geopolitical realities complicate the roadmap. US export controls on advanced chips, global foundry dependencies and supplier choices (for example, Samsung foundry work) will shape how quickly high-end systems can be delivered worldwide, and how much of Nvidia's vision can be monetised in sensitive markets. Can a single vendor stitch together chips, racks, satellites and agents and still navigate trade policy? That is the commercial and geopolitical test ahead.

Tags: AI · Semiconductors · Space