Karen Hao (郝考蓝) says OpenAI lied, Google silenced her, Altman is a performer — and AI will make us stupid, not jobless
Key claims from a wide-ranging SXSW interview
Award-winning reporter Karen Hao, author of Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, gave a long interview at SXSW that has reignited debate over the culture and claims of the AI industry. Hao, who says she spent years interviewing more than 250 people (including more than 90 current or former OpenAI staff) for her book, reportedly accused OpenAI of shifting the meaning of "AGI" to suit different audiences and described the company's public messaging as disingenuous. She is also reported to have said that Google silenced her; that claim remains unverified and is presented here as it appears in the interview transcript published by Huxiu.
Hao's portrait of OpenAI is blunt. She characterizes Sam Altman as a consummate performer who tailors his rhetoric to regulators, investors and the public, and she reportedly describes Ilya Sutskever as a "true believer" in a technical-messianic vision. Hao argues that the industry's slippery use of grand terms like "AGI" matters because the label mobilizes capital and policy while masking shifting commercial motives, sometimes even redefining the goal to match investor returns. Those are strong charges from a journalist whose book quickly reached the New York Times bestseller list.
Why Western readers should care — and the geopolitical angle
Why does this matter beyond Silicon Valley theater? Because the debate over AGI and AI governance is unfolding amid real geopolitical tension: U.S. export controls on advanced chips, concerns about Chinese access to cutting-edge compute, and transatlantic scrutiny of big-tech promises all color how governments and markets respond. Hao's reporting, if accurate, suggests that the same rhetorical tools that win fundraising and regulatory patience in the U.S. can obscure risks and misalign priorities globally. Her reported warning that "AI won't make you unemployed but will make you stupid" reframes the social risk away from mass job loss and toward eroded judgment and civic capacity, a claim that policymakers and employers should not ignore.
Hao's SXSW conversation is a reminder that the AI story is not just technical. It is also institutional, ideological and political. Who defines the goals of AI? Who benefits when language and ambition are repackaged for different audiences? As governments design rules and supply chains for AI, those questions are now front and center.
