虎嗅 (Huxiu) · 2026-03-16

After AI "Poisoning" Was Exposed, Are We—Who Let AI Decide Everything—Still "Free"?

CCTV exposé and the rise of GEO

CCTV investigators have exposed a nascent industry paid to shape the answers served by large AI models. It has been reported that a service marketed as GEO (Generative Engine Optimization) openly offers to "feed" or even "poison" model inputs so a client's product becomes the AI's default answer; a man filmed by reporters reportedly boasted that, for a fraction of a typical advertising budget, he could make brands "top of mind" for AI tools. The practice is pitched as the successor to SEO, the familiar art of ranking in search engines, but aimed instead at models such as OpenAI's ChatGPT, Perplexity, and Chinese models now embedded in domestic platforms' services.

What GEO means for users and brands

GEO exploits the fact that conversational and generative models synthesize answers rather than returning lists of links to click, and that many users treat those synthesized answers as objective. That trust creates a powerful incentive for brands, universities, and recruiters to shape the content and citations a model draws on. It has been reported that long-term estimates put the potential GEO market at roughly $40–50 billion, mirroring the scale of today's SEO industry as AI search traffic climbs. For Western readers unfamiliar with China's landscape: Chinese firms such as Alibaba (阿里巴巴), Quark (夸克), and Tencent (腾讯) are already embedding AI assistants into services that influence choices from shopping to college admissions, so the stakes for reputational manipulation are immediate.

Governance, trust and geopolitics

Why worry? Because when an AI answer carries the weight of authority, subtle manipulation can rewire consumer choice without overt advertising. Who verifies the veracity of the "standard answer"? If models can be gamed, consumers lose a layer of agency, and regulators gain a new headache. The issue also intersects with global tech tensions: as Western governments tighten scrutiny of AI systems and impose export controls on chips and model components, countries are simultaneously racing to build domestic model ecosystems, raising questions about standards, transparency, and cross-border trust. The Waze analogy is apt: crowd-sourced navigation works until incentives turn the crowd into a market-driven signal. Regulators, platforms, and brands now face a choice: influence the AI, or be influenced by it.
