凤凰科技 (Phoenix Tech) · 2026-04-18

Jensen Huang: “Paper Tigers” and a Five‑Layer Moat — Why Nvidia Belongs at the Center of AI

Huang doubles down on Nvidia’s ecosystem, calls rivals “paper tigers”

Nvidia CEO Jensen Huang delivered a combative, wide‑ranging defense of his company’s position in AI during a long podcast appearance, arguing that competitors’ self‑developed chips are largely “paper tigers” and that Nvidia’s advantage is an ecosystem, not a single part. He conceded one misread — underestimating Anthropic — but framed that as an exception rather than a trend. Is Nvidia unassailable? Huang’s answer: not because of raw silicon alone, but because of a “five‑layer cake” of software, interconnects, packaging, memory and partner relationships that is extremely hard to replicate.

Huang said Nvidia designs the software stack, then relies on foundries and memory suppliers — notably TSMC and Micron — for manufacturing and HBM. He argued that convincing those suppliers to commit capacity years in advance is itself part of the moat. He also played down fears that supply shortages are permanent: historically, he said, such hardware bottlenecks resolve within two to three years given heavy capital investment, a process Nvidia accelerates through licensing deals, partnerships and direct funding.

Performance claims, benchmarks and a bet on CUDA

Huang credited Blackwell’s large generational leap over Hopper to deep hardware–software co‑design with CUDA and to optimization for newer model architectures such as Mixture of Experts, claiming orders‑of‑magnitude gains and saying that only Nvidia’s own experts can push the hardware to its full potential. He publicly invited rivals using TPUs or AWS Trainium to submit to third‑party tests such as MLPerf and InferenceMAX to settle performance debates. He also noted — and this is his contention — that two of the world’s top three large models were trained on Google’s TPU, a point he used to explain why some customers still diversify their compute bets.

On funding, it has been reported that Nvidia has made unusually large strategic commitments to major AI model companies — reportedly on the order of $30 billion to OpenAI and $10 billion to Anthropic — moves Huang framed as ecosystem building rather than traditional venture equity. Those figures remain reported and unconfirmed, but they underscore Nvidia’s willingness to underwrite the compute ecosystem rather than become a hyperscaler itself.

Strategic implications amid geopolitics

Huang’s remarks matter in a geopolitical context where export controls and US‑China tensions shape who can access top‑end chips and memory. If design, packaging and supply‑chain persuasion are the moat, then policy risks and trade restrictions become material strategic levers — for suppliers, customers and countries racing to build local AI stacks. Huang also stressed that the real operational scarcity for AI expansion is skilled labor — the electricians and data‑center crews — not just extreme‑ultraviolet (EUV) lithography tools or wafers, a reminder that scaling AI is as much a logistical challenge as a technical one.

Whether rivals can close the gap depends on more than a faster chip; it requires software ecosystems, long‑term bilateral supplier commitments and billions in capital. Huang’s message was blunt: compete on software and systems, or risk becoming the next pile of “electronic waste.”

Tags: AI · Semiconductors