Phoenix Technology (凤凰科技) · 2026-04-01

China’s AI industry rolls out strict self-regulation to curb forced features and protect data

According to Phoenix New Media (凤凰网), 18 domestic large-model makers and 233 upstream and downstream firms have jointly released the "New‑generation Artificial Intelligence Industry Functional Specification Management Initiative and Implementation Requirements" (《新一代人工智能产业功能规范管理倡议与实施要求》). The voluntary code targets familiar consumer complaints — forced AI bundling, opaque fees, hidden switches, and data‑security risks — and sets out what proponents call the industry's first comprehensive self‑remediation plan.

What the rules require

The key measures are blunt and practical. All software and smart devices must include a one‑click, clearly visible AI off switch; once switched off, the AI must stop running in the background, halt data collection, and free device resources. Hidden multi‑layer toggles and deceptive automatic restart schemes (7‑day or 30‑day reactivation) are explicitly banned. Vendors selling fully paid‑for hardware must not add undisclosed secondary software paywalls; native device features — casting, resolution control, basic AI assistants — cannot be carved out as extra charges. Televisions and cars, singled out as high‑risk categories, face stricter rules: no startup ads, no standby pop‑ups, a requirement to reach live content within 10 seconds of power‑up, and limits on layered membership fees across the same brand or platform.

Data, copyright and limits on AIGC

The initiative draws a red line around personal data: user privacy data must not be used for model training without consent, AI must not secretly record or "learn" from user information, and privacy takes precedence when AI capability upgrades clash with personal‑data protections. On AIGC (AI‑generated content), the rules frame models as creative aids and assign copyright to the human creator. But they also reportedly restrict models from producing celebrity likenesses, well‑known IP, or certain character art to reduce infringement risk — a move that may constrain some generative use cases.

Why this matters — and what’s next

Why has industry taken this step now? Partly to pre‑empt heavier state regulation and to reassure wary consumers. Some analysts have reportedly called the document the strictest self‑discipline standard in China's AI sector to date, arguing that, if implemented, it could eliminate many abusive commercial practices and reduce security blind spots. For Western readers unfamiliar with China's tech scene: Beijing has already tightened AI oversight, and U.S. export controls and semiconductor restrictions have accelerated China's push for domestic AI capacity. Whether this voluntary framework will be enforced consistently — and how it will interact with formal government rules and cross‑border trade pressures — remains unclear.
