After Lin Junyang's departure, Wu Yongming urgently ramps up AI training
Following Lin Junyang's departure, Wu Yongming (吴永明) has reportedly ordered an urgent ramp-up of internal AI training and safety work across his organization to comply with new national rules. The move follows the joint release, by the Cyberspace Administration of China (国家网信办) and four other departments, of the "Interim Measures for Management of Anthropomorphic Interactive AI Services," scheduled to take effect on July 15, 2026. Companies now face clearer obligations when AI systems interact with users in emotionally charged situations, along with faster enforcement timelines.
What the new rules require
The Interim Measures require that when an AI service provider detects a user exhibiting extreme emotions, the service must first attempt to calm the user and encourage them to seek professional help, and, in severe cases, promptly notify the user's guardian or an emergency contact. Enforcement is graduated: warnings, public criticism, ordered rectification, suspension of services and, for persistent or serious violations, fines of RMB 10,000 to 100,000; violations that cause harm to life or health can trigger fines of RMB 100,000 to 200,000. Certain triggers are also reported to require immediate safety assessments and the submission of evaluation reports to provincial-level internet authorities.
Why this matters — for firms and geopolitics
Why the sudden scramble? Because the measures raise both the technical and the legal stakes for conversational and "persona-style" AI features that are popular with consumers but are now explicitly regulated. For Western readers, China's tighter, prescriptive approach contrasts with the patchwork of industry-led guidance and litigation seen in the U.S. and Europe, and it arrives as geopolitical frictions, including export controls and sanctions on advanced computing and AI chips, already complicate Chinese firms' product road maps. Can companies retool models and retrain staff quickly enough to avoid penalties while retaining the features users expect? That is the question industry observers are watching most closely.
Expect firms to publish compliance road maps and to emphasize safety-by-design in their next product updates. Wu's rapid training push is reportedly already underway; enforcement later this year will reveal whether the measures prompt meaningful changes in how Chinese AI platforms handle vulnerable users.
