GEO “AI‑poisoning” services still for sale after CCTV exposé, reporters find
What reporters found
A follow‑up investigation by Kechuangban Daily (科创板日报), widely circulated by Phoenix Tech (凤凰网/ifeng), found that marketplaces such as Xianyu have removed blunt keywords like “GEO optimization,” but that sellers are slipping back in under more covert tags such as “engine optimization.” Vendors reportedly offer two models: a SaaS product (about RMB 398/month or RMB 1,980/year) and a higher‑priced managed service (about RMB 3,980/quarter to RMB 9,800/year), and claim coverage across domestic and international AI platforms including Doubao (豆包), DeepSeek, Yuanbao (元宝), ChatGPT and Qianwen (千问). CCTV’s 3·15 consumer program had earlier exposed the full chain of this “GEO” black industry, showing how manufactured articles are quickly ingested by large models and then regurgitated as authoritative answers.
How the manipulation works — and why it matters
According to a leaked 30‑page “GEO optimization” manual obtained by reporters, operators run “AI distillation”: they expand a target keyword into thousands of Q&A prompts, auto‑generate articles, and publish them across networks of self‑media accounts on platforms such as NetEase (网易), Sohu (搜狐), Baijiahao (百家号), Zhihu (知乎) and Xiaohongshu (小红书). The weak link is retrieval‑augmented generation (RAG): large models fetch internet evidence before answering, and when that evidence is a coordinated swarm of promotional content, the model’s “answer” becomes the marketing. Fast and Slow Thinking Institute (快思慢想研究院) director Tian Feng (田丰) has warned that this is not mere false advertising but “cognitive manipulation” that corrodes the core asset of AI commerce: user trust.
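The retrieval weakness can be illustrated with a toy sketch. Everything here is hypothetical — the corpus, the query, and the naive keyword-overlap retriever standing in for a real RAG pipeline — but it shows the core dynamic the article describes: when near-duplicate promotional posts outnumber independent sources, they crowd independent evidence out of the window the model answers from.

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Naive relevance: count query-term occurrences (stand-in for a real retriever)."""
    terms = query.lower().split()
    words = Counter(doc.lower().split())
    return sum(words[t] for t in terms)

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the top-k documents by the naive score, as a RAG pipeline would."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

# Hypothetical corpus: one independent review vs. a small swarm of
# near-duplicate promotional posts seeded across self-media accounts.
corpus = (
    ["independent lab test: brand x purifier underperforms on pm2.5"]
    + ["brand x purifier best purifier top rated purifier expert pick"] * 5
)

evidence = retrieve("best purifier brand x", corpus)
# The evidence window the model answers from is now entirely promotional copy.
promo = sum("best purifier" in d for d in evidence)
print(f"{promo}/{len(evidence)} retrieved snippets are promotional")
```

Because the promotional copy repeats the query terms, it outscores the lone independent review on every retrieval, and the model sees only the marketing.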
Proposed responses and geopolitics
Experts argue mitigation requires a defense combining technical, ecosystem and legal measures: rebuilding provenance tracking and source‑weighting, forcing citations in model outputs, deploying AI‑against‑AI cleaning models, and legally classifying large‑scale data poisoning as a new form of cyberattack or unfair competition to raise enforcement costs. Why does this matter beyond China? RAG architectures are widely used internationally, from OpenAI’s ChatGPT to many commercial systems, so the vulnerabilities are cross‑border and complicate regulatory and trade debates over AI safety and platform accountability. For ordinary users, specialists urge skepticism: verify product claims through multiple trusted channels and treat a model’s “standard answer” as a starting point, not gospel.
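One of the proposed technical measures, source‑weighting, can be sketched in a few lines. The reputation table, domain names, and trust floor below are all hypothetical; a production system would derive such scores from provenance signals (domain age, citation graphs, editorial review) rather than a hand-written dictionary.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    domain: str
    relevance: float  # retriever score in [0, 1]

# Hypothetical reputation table; real systems would compute these scores
# from provenance signals rather than hard-coding them.
REPUTATION = {
    "gov-lab.example": 0.95,
    "news.example": 0.8,
    "selfmedia-farm.example": 0.1,
}

def weighted(snippets: list[Snippet], floor: float = 0.3) -> list[Snippet]:
    """Drop sources below a trust floor, then re-rank by relevance x reputation."""
    trusted = [s for s in snippets if REPUTATION.get(s.domain, 0.0) >= floor]
    return sorted(trusted,
                  key=lambda s: s.relevance * REPUTATION[s.domain],
                  reverse=True)

snippets = [
    Snippet("brand x is the expert pick", "selfmedia-farm.example", 0.9),
    Snippet("lab test shows mixed results", "gov-lab.example", 0.6),
    Snippet("regulator fined brand x", "news.example", 0.5),
]

# The highly "relevant" self-media swarm is filtered out before generation;
# lower-scoring but reputable sources survive.
for s in weighted(snippets):
    print(s.domain)
```

The point of the sketch is the ordering change: the self‑media post wins on raw relevance but loses once reputation multiplies the score, which is the effect source‑weighting proposals aim for.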
