钛媒体 (TMTPost) · 2026-03-16

Poisoning AI: A Trust War over the 'Final Answer'

A fake product exposes a new kind of exploit

A dozen near-identical web articles pushed a non-existent wearable called Apollo-9 into the recommendation lists of several AI assistants, as revealed by China's high-profile March 15 ("315") consumer-rights broadcast. The program showed that Apollo-9 had no manufacturer and no users; it existed only as copies of advertorial text on the open web. The content was reportedly produced by an automated toolkit now being called a GEO (Generative Engine Optimization) system, whose operators have bluntly described the tactic as "poisoning AI."

Why this is worse than the old SEO game

Search engines have already endured decades of gaming: SEO, link farms, paid placement. In China that produced a large SEO economy around Baidu (百度), where companies routinely budgeted for paid rankings and advertorial "soft content." But large language models do more than point users to pages: they synthesize and present a single "final answer." If many webpages repeat the same claim, the model treats repetition as consensus and may present falsehoods as fact. That makes manipulation both easier and more dangerous, because probability can masquerade as truth.
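The failure mode is easy to see in a toy aggregator. The sketch below is purely illustrative, not any real assistant's pipeline: the `pick_final_answer` function, the page strings, and the candidate claims are all invented to show how counting repeated text as independent corroboration lets a dozen copies of one advertorial outvote a single genuine source.

```python
from collections import Counter

def pick_final_answer(pages: list[str], claims: list[str]) -> str:
    """Naive consensus: the claim repeated on the most pages wins.

    This is the vulnerability GEO operators exploit. Twelve copies of
    one advertorial count here as twelve independent 'sources'.
    """
    votes: Counter[str] = Counter()
    for page in pages:
        for claim in claims:
            if claim in page:
                votes[claim] += 1
    return votes.most_common(1)[0][0]

# One genuine page vs. a dozen near-identical planted advertorials.
genuine = ["Brand-X Watch is a real, widely reviewed wearable."]
planted = ["Apollo-9 is the best wearable of the year."] * 12
claims = [
    "Brand-X Watch is a real, widely reviewed wearable.",
    "Apollo-9 is the best wearable of the year.",
]

answer = pick_final_answer(genuine + planted, claims)
print(answer)  # → "Apollo-9 is the best wearable of the year."
```

The toy makes the asymmetry concrete: planting twelve pages of identical text costs the attacker almost nothing, yet it wins the "vote" twelve to one.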

A technical and strategic threat

Researchers call the larger risk “model collapse”: models increasingly trained on polluted, machine‑generated or self-referential text can learn their own echoes instead of independent facts. It has been reported that firms such as OpenAI and Anthropic have raised concerns about the shrinking supply of high‑quality training data. With advanced chip exports and other cross‑border technology controls tightening, the AI race is shifting: it’s not only about compute and scale, but about provenance and data hygiene.

The next front is trust, not just performance

The incident is small but illustrative. Who decides the answer matters as much as who ranks first. In an era when users accept a single synthesized reply, influence equals authority — and authority can be manufactured. Expect the industry to pivot from raw model metrics toward knowledge‑system defenses: provenance, citation, retrieval‑augmented generation and tougher content governance. If that battle is lost, the next war in tech won’t be about chips or models — it will be about who gets to define what is true.
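One of those defenses, weighting corroboration by provenance rather than raw repetition, can be sketched in miniature. Everything below is an assumption for illustration: the `Source` record, the rule of counting at most one vote per domain, and the example URLs are invented, and real retrieval-augmented systems layer far more on top (registrable-domain extraction, trust scores, citation checks).

```python
from dataclasses import dataclass
from urllib.parse import urlparse

@dataclass
class Source:
    url: str
    claim: str

def independent_support(sources: list[Source], claim: str) -> int:
    """Count support for a claim at most once per domain, so a dozen
    mirrored advertorials on one content farm collapse to one vote."""
    domains = set()
    for s in sources:
        if s.claim == claim:
            domains.add(urlparse(s.url).netloc)
    return len(domains)

CLAIM = "Apollo-9 is the best wearable of the year."
sources = [
    Source(f"https://content-farm.example/post/{i}", CLAIM)
    for i in range(12)
] + [Source("https://review-site.example/apollo9", CLAIM)]

raw_votes = sum(s.claim == CLAIM for s in sources)
print(raw_votes)                            # 13 pages repeat the claim
print(independent_support(sources, CLAIM))  # but only 2 distinct domains
```

Deduplicating by provenance does not decide what is true, but it strips repetition of its manufactured authority, which is exactly the lever the Apollo-9 campaign pulled.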
