OpenAI: ChatGPT ads will not be launched globally for the time being
Lead: safety and trust trump fast ad revenue
OpenAI said it will not roll out ChatGPT ads globally for the time being, a move that underscores growing concern about the integrity of AI-generated answers and the commercial systems that feed them. Short-term ad revenue looks attractive. But can an advertising model survive in an environment where third parties can deliberately seed AI training data? That question is now pressing.
Reported data‑poisoning services in China
It has been reported that a Chinese service branded as the "Liqing (力擎) GEO optimization system" — run by Beijing Lisi Cultural Media Co. (北京力思文化传媒有限公司) — sells paid campaigns that deliberately plant content across the internet so that major Chinese large language models will surface a client's product or advertisement as the "standard answer." The operator reportedly told journalists that these GEO services work by mass-posting content through networks of accounts so that crawlers and models ingest the manipulated sources. Tianyancha corporate records show the company was founded in 2018, has applied for a software copyright covering a "media posting and management platform," and recently sought trademarks including "力擎" and "壹豹客." Screenshots accompanying the reports list targeted models, including Wenxin Yiyan (文心一言, Baidu), Tongyi Qianwen (通义千问, Alibaba), and others.
What this means for consumers, markets and firms like OpenAI
If third parties can reliably make low-quality or misleading content look like the consensus output of an AI model, consumer trust collapses. It has been reported that GEO-style "data poisoning" can expose consumers to falsified product claims or dangerous advice disguised with an AI badge of authority. For markets, the result is distorted competition: companies with the biggest "poisoning" budgets, not the best products, could win visibility. For foreign firms such as OpenAI, and for regulators in the US and EU, that risk complicates decisions about monetization and global rollout amid heightened scrutiny of data integrity, cross-border data flows, and trade-policy tensions.
Broader implications: search, regulation and the race for trustworthy AI
If the public loses faith in model answers, traditional search engines might regain leverage by selling “verified” results. Will the industry pivot to verified-source layers or stricter provenance and data‑auditing requirements? Policymakers and platform owners now face a choice: accelerate technical and regulatory defenses against manipulation, or accept that ad‑driven AI might import the same distortions that already plague the open web. OpenAI’s temporary pause on global ad expansion signals that at least some major vendors are leaning toward caution.
