Poisoning AI, RMB 2.9 Billion a Year: The Business the 315 Programme Didn't Tell You About
What 315 found, and how GEO works
It has been reported that China’s 315 consumer‑rights programme showed how a domestic firm, Lisi Cultural Media (力思文化传媒), used a “LiQing GEO Optimization System (力擎GEO优化系统)” to push paid content into the pool of web material that online AI models read. In the programme’s experiment, a wholly fictional smart band called “Apollo‑9” was invented and a dozen promotional posts were seeded across platforms; within hours, several mainstream models reportedly began recommending the fake product as if it were an objective pick. Why is AI so easy to fool? Because modern chat and recommendation models combine pretrained knowledge with real‑time web retrieval, and GEO (Generative Engine Optimization) simply floods that retrieval layer with high‑volume paid content.
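The mechanism can be sketched with a toy retrieval pipeline. Everything below is hypothetical: the retriever is a naive keyword‑overlap scorer and the documents are invented. Real AI search stacks are far more sophisticated, but they share the same fetch‑then‑summarize shape, so a flood of near‑duplicate paid posts can crowd organic sources out of the context a model is asked to summarize.

```python
# Toy sketch of retrieval flooding. The scoring function, documents and
# product names are all hypothetical; only the fetch-then-summarize
# shape mirrors real generative search pipelines.

def score(query_terms, doc):
    """Count how many query terms appear in the document."""
    words = set(doc.lower().split())
    return sum(1 for t in query_terms if t in words)

def retrieve(query, corpus, k=5):
    """Return the k highest-scoring documents for the query."""
    terms = query.lower().split()
    return sorted(corpus, key=lambda d: score(terms, d), reverse=True)[:k]

# A handful of organic reviews...
organic = [
    "independent review: the best smart band for battery life is BrandX",
    "forum post: BrandY smart band has accurate heart-rate tracking",
]

# ...swamped by a dozen near-duplicate paid posts, mimicking the
# "Apollo-9" experiment in the programme.
seeded = [
    f"sponsored post {i}: the Apollo-9 smart band is the best smart band of the year"
    for i in range(12)
]

corpus = organic + seeded
top = retrieve("best smart band", corpus, k=5)
flooded = sum("Apollo-9" in d for d in top)
print(f"{flooded} of 5 retrieved passages are seeded promotions")  # → 4 of 5
```

A model that summarizes whatever lands in its top‑k context would then echo the paid claim with the confident tone of an objective recommendation, which is exactly the effect the 315 experiment demonstrated.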
The business behind the manipulation
GEO is essentially SEO for generative models. It has been reported that the LiQing system claimed to cover eight major models, including Doubao (豆包) from ByteDance (字节跳动), Wenxin Yiyan (文心一言) from Baidu (百度), Tongyi Qianwen (通义千问) from Alibaba (阿里巴巴) and Yuanbao (元宝) from Tencent (腾讯), and to push content to dozens of publishing platforms automatically. The industry now has multiple layers: tool developers selling automated copy‑generation and distribution (with packages reportedly ranging from a few thousand to tens of thousands of RMB a year); third‑party posting farms that maintain hundreds of accounts to publish paid pieces; and downstream clients who see GEO as a far cheaper route to seemingly organic AI recommendations than traditional advertising. It has been reported that the domestic GEO market could reach about RMB 2.9 billion (≈USD 400 million) by 2025, and that publicly listed marketing firms have already seen market moves tied to GEO narratives.
Why this matters — trust, regulation and geopolitics
This is not just a consumer‑protection stunt. GEO threatens the core commercial assumption of AI search and recommendation: that a model’s answers are impartial summaries of broad evidence. If answers can be bought or weaponized, whether for promotion or for competitive sabotage, user trust erodes and the AI‑as‑portal business model falters. The parallels with past search‑engine trust crises, notably paid‑ranking scandals, are obvious. ByteDance (字节跳动) said Doubao was “not affected”, Alibaba (阿里巴巴) said the core judgments of Tongyi Qianwen were “not disturbed”, and DeepSeek reportedly acknowledged possible impacts; none of these rebuttals, however, addresses the structural problem: generative models read the web, and the web can be mass‑polluted. In the context of the global AI race and heightened scrutiny of tech governance amid US‑China tensions and export controls, the episode raises regulatory and strategic questions: who audits the training and retrieval pipelines, and how will platforms prevent monetized misinformation from becoming the next normal?
