Phoenix Tech (凤凰科技) · 2026-03-16

Paid “GEO” campaigns can push fake products into Chinese AI models, report shows


Commercial services known as GEO (Generative Engine Optimization) can rapidly "poison" large AI models used in China, pushing completely fabricated products into model answers and recommendation lists, according to a Tech.ifeng investigation. The report showed that, at little cost and with a few hours of work, operators can generate and publish dozens of promotional pieces that are then ingested by mainstream models, which go on to repeat the false claims as if they were facts.

How the manipulation works

Reporters who tested the service say they purchased a tool marketed as 力擎GEO (Liqing GEO) on Taobao, invented a fictional wearable called "Apollo‑9" and used the software to auto‑generate and post more than a dozen soft‑promotion articles across prepared accounts. Within two hours, multiple models, reportedly including DeepSeek, Doubao (豆包) and Baidu's Wenxin Yiyan (文心一言), began recommending the fake product and repeating invented features such as "quantum entanglement sensing" and "non‑invasive blood glucose measurement." According to the investigators, the key is volume and diversity of sources: enough superficially independent articles that the models' retrieval and cross‑validation mechanisms treat the material as genuine.
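The weakness the investigators describe can be illustrated with a toy sketch. Everything below is hypothetical — the scoring rule, the document corpus and the "Apollo‑9" claim are illustrative, not the actual ranking logic of any model named in the report — but it shows why counting agreeing sources, without checking provenance, is gameable by a campaign that publishes one claim across many throwaway accounts:

```python
def corroboration_score(claim: str, documents: list[dict]) -> int:
    """Naive corroboration: count how many *distinct* sources repeat a claim.

    This loosely mimics a retrieval pipeline that treats agreement across
    sources as evidence of truth, with no check on where the sources came from.
    """
    sources = {doc["source"] for doc in documents if claim in doc["text"]}
    return len(sources)

# Fabricated promotional posts spread across many throwaway accounts,
# all repeating the same invented product feature.
claim = "Apollo-9 offers non-invasive glucose measurement"
poisoned_corpus = [
    {"source": f"blog-{i}", "text": f"Hands-on review: {claim}, a breakthrough!"}
    for i in range(12)
]
# One genuine source that never mentions the fake product.
poisoned_corpus.append({"source": "medical-journal", "text": "No such device exists."})

print(corroboration_score(claim, poisoned_corpus))  # → 12
```

To this scorer the claim looks twelve-ways corroborated, even though all twelve "sources" trace back to a single paid campaign — which is why the report's operators focus on volume and apparent diversity rather than on any one authoritative placement.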

The business of gaming AI

GEO services are reportedly sold in tiered packages, from basic annual plans costing a few thousand yuan to premium tiers that can generate tens of thousands of articles per year; screenshots cited by the report showed prices from ¥2,980 to ¥16,980, with a top tier claiming 23,040 articles a year. GEO providers reportedly pitch to sectors from medical and education to security and appliances, promising clients higher visibility when consumers ask AI assistants for product recommendations. A person identified as a Liqing GEO representative told the reporter these services are popular because they help clients achieve immediate commercial goals — and acknowledged the practice is ethically dubious.

Why this matters — and what comes next

The episode highlights a shift from traditional SEO to “AI SEO,” and the attendant risks when large language and retrieval models rely heavily on web‑published content without robust provenance checks. False medical claims and exaggerated product features are not just reputation problems; they can cause real harm. Who polices the evidence chain — platform operators, model makers like Baidu (百度), or regulators such as China’s Cyberspace Administration — remains an open question. With regulators in China already tightening rules on online content and with global debate about AI safety intensifying, the report raises urgent questions: can current safeguards keep pace with commodified manipulation techniques, and what enforcement will follow?

AI · Smartphones · Robotics