Generative‑AI porn market exposed as China probes prompt ‘recipes’ sold online
What CCTV uncovered
China Central Television (中央电视台, CCTV) has exposed a growing underground market that uses generative AI to produce sexualized and explicit images and videos. Investigative reporting published by ifeng and Times Weekly found merchants openly selling detailed prompt "recipes" — nicknamed "焚诀" (literally, "burning formulas") — on platforms including Douyin (抖音), Bilibili (哔哩哔哩) and Xianyu (闲鱼). Buyers reportedly pay as little as 9.9–39.9 yuan for step‑by‑step files that specify hairstyle, clothing, posture and even micro‑movements; some sellers translate sensitive terms into English to try to evade automated filters.
How the misuse works — and where safeguards fail
Mainstream Chinese generative models such as Baidu (百度)'s Wenxin Yiyan (文心一言, known internationally as ERNIE Bot) and other services routinely decline explicit prompts. But CCTV's probe found that slightly obfuscated or iteratively revised instructions can slip past some filters and yield sexually suggestive images; according to the report, certain smaller or third‑party AIGC platforms produced full‑nude videos within minutes of investigators pasting in purchased prompts. Vendors also advertise "卸甲" ("one‑click undress") tutorials and special creator links that they claim can generate explicit deepfakes — a symptom of the cat‑and‑mouse dynamic platforms describe as "black‑market innovation outpacing defenses."
Commercial scale and the low barrier to harm
Investigators traced one AIGC site, developed by a firm in Anhui, that sells site tokens ("RH coins") in cheap bulk packages: CCTV's tests suggest a few yuan can buy dozens of short explicit clips. The report also found AI photo‑swap tools advertising one‑click conversion of ordinary images into fabricated nude photos and videos. Who is accountable — the user, the seller of the prompts, or the platform that hosts the model? Legal experts quoted in the report say generating explicit content with AI can trigger civil, administrative and even criminal liability in China; the legal definition of "pornographic" content, they note, hinges on exposure and sensory effect, not on whether the subject is a real person.
Why this matters beyond China
This episode highlights a global governance problem: powerful generative models lower the technical barrier to creating intimate‑image deepfakes, and automated moderation struggles to keep pace. As regulators from the EU to the US weigh tough new rules on AI and deepfakes — and as China tightens platform controls and pursues legal remedies — platforms, lawmakers and courts will have to answer practical questions about detection, attribution and penalties. Can technological safeguards be built faster than bad actors adapt? For now, investigators warn, the arms race is very much alive.