虎嗅 2026-03-12

After Breaking with Zuckerberg, "Godfather of AI" LeCun Returns with $1 Billion and a Crazy Bet

The pivot and the money

Yann LeCun, the Turing Award winner and one of deep learning's founding figures, has returned to the startup stage. His Paris-based company AMI Labs announced a $1.03 billion seed round at a reported $3.5 billion pre-money valuation, making it, by LeCun's claim, the largest seed round in European history. Lead investors include Cathay Innovation, Greycroft, Hiro Capital and HV Capital; high-profile U.S. backers, reportedly including Jeff Bezos, Nvidia, Eric Schmidt and Mark Cuban, are said to have lined up behind the deal.

The break with Meta

This is not a conventional founder story. LeCun spent 12 years as chief AI scientist at Meta (formerly Facebook), where he founded FAIR, the lab behind many influential open-source models. He left in late 2025 after public disagreements over direction, notably his criticism of the company's pivot to large language models (LLMs) and of the appointment of Alexandr Wang to lead a new "superintelligence" lab, and he has been explicit that he considers the current LLM craze a dead end. He has even suggested Meta could end up a customer rather than a rival: AMI will focus on physical-world understanding while Meta focuses on generative text.

The bet: world models and JEPA

LeCun's wager is technical and bold. Rather than chasing ever-bigger LLMs that predict the next word, AMI Labs is building "world models" that predict how the world evolves from video, audio and sensor streams. He describes a Joint Embedding Predictive Architecture (JEPA) that learns abstract representations and makes predictions in that latent space: the idea is to capture commonsense physics (a dropped ball will fall) without modeling every pixel. LeCun says he already has prototype systems demonstrating a form of commonsense, and he argued in Davos that any credible path to human-level embodied intelligence will need this approach, not just ever-larger text models. His timeline is measured: major progress is plausible within a decade, but not the 1–2 year AGI fireworks some predict.
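The core JEPA idea described above can be sketched in a few lines. This is a hypothetical, NumPy-only illustration, not AMI's implementation: the encoders and predictor are stand-in random linear maps, and the dimensions are invented for the toy. What it shows is the structural point from the article: the prediction target and the loss live in a small latent space, so nothing forces the model to account for every pixel of the next frame.

```python
import numpy as np

rng = np.random.default_rng(0)

D_IN, D_LATENT = 64, 8  # toy sizes (illustrative, not from the article)

# Fixed random linear maps stand in for learned encoder/predictor networks.
W_ctx = rng.normal(size=(D_IN, D_LATENT)) / np.sqrt(D_IN)
W_tgt = rng.normal(size=(D_IN, D_LATENT)) / np.sqrt(D_IN)
W_pred = rng.normal(size=(D_LATENT, D_LATENT)) / np.sqrt(D_LATENT)

def jepa_loss(context_frame, future_frame):
    """Predict the future frame's latent from the context frame's latent.

    The error is measured between two low-dimensional latent vectors,
    not between raw pixel arrays: prediction happens in abstract space.
    """
    z_ctx = context_frame @ W_ctx   # abstract representation of "now"
    z_tgt = future_frame @ W_tgt    # abstract representation of "next"
    z_hat = z_ctx @ W_pred          # prediction made in latent space
    return float(np.mean((z_hat - z_tgt) ** 2))

frame_t = rng.normal(size=D_IN)                     # e.g. a flattened video frame
frame_t1 = frame_t + 0.01 * rng.normal(size=D_IN)   # the world, slightly evolved

print(f"latent-space prediction loss: {jepa_loss(frame_t, frame_t1):.4f}")
```

In a real system the encoders would be trained networks and the loss would drive learning; the sketch only fixes the shape of the objective, which is what distinguishes this family of models from pixel-level or next-token prediction.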

Why it matters — for AI and geopolitics

The raise is happening in a crowded, fast-shifting market. Stanford’s Fei‑Fei Li (李飞飞) recently closed a separate $1 billion round for World Labs; together the world‑model field reportedly attracted more than $2 billion in a month. That matters for global talent flows — AMI says it has hired researchers from OpenAI, DeepMind and xAI — and for geopolitical dynamics: export controls on advanced chips, U.S.–China technology tensions and national strategies around robotics and autonomy will shape who can train and deploy these systems at scale. China’s vibrant robotics scene and startups focused on humanoid machines add another dimension: many Beijing- and Shenzhen-based teams are already chasing embodied AI use cases. Will LeCun’s world models deliver the predictive, physical intuition that LLMs lack? That question will define the next phase of AI competition — scientific, commercial and geopolitical.
