CCTV (央视) March 15 Gala exposes poisoning of large-scale AI models
What was revealed
State broadcaster CCTV (央视) used its March 15 consumer-rights gala to spotlight a growing industry of “data poisoning” aimed at large AI models. It has been reported that tech news site IT Home (IT之家) followed up with reporting that uncovered firms offering a service called “GEO” that, for a fee, claims to make a client’s product appear as the “standard answer” in mainstream AI models by seeding and amplifying targeted content across the web.
How the scheme reportedly works
According to the reporting, GEO vendors say they “feed” AI systems by mass-publishing promotional articles and posts from many internet accounts so that model crawlers ingest the material and learn to surface those products as recommendations. IT Home reporters reportedly bought a commercial GEO tool — the “力擎 GEO 优化系统” — created a fictitious smart wristband, used the system to generate and publish dozens of promotional pieces, and then found that two large AI models recommended the fabricated product when asked, ranking it highly.
Vendor claims and industry scale
Reporters quoted service operators — identified only as company principals — who described a business built around continuous content injection. They argued that maintaining model-level influence requires persistent, high-volume publishing across many accounts, likening the work to paid advertorials optimized for AI ingestion. It has been reported that the success of GEO services has spawned specialist “posting” companies and platforms that sell distribution at scale, turning what began as search-engine-style SEO into an industrialized method to bias AI outputs.
Why it matters
Data poisoning threatens model integrity, consumer trust and the commercial value of AI recommendations. In China, the exposures come amid increasing domestic scrutiny of platform content and algorithmic harms; internationally, they add another layer to concerns about model safety even as Western and Chinese firms navigate sanctions, export controls and supply‑chain limits on AI development. Who polices the training data that shapes AI behavior — platforms, regulators, or model developers — is now a pressing question for policymakers and the industry alike.
