Phoenix Tech (凤凰科技) · 2026-04-06

Yi Yangqianxi Draws a Red Line for ByteDance

What happened

ByteDance (字节跳动)’s short‑drama app Hongguo Short Drama (红果短剧) pulled an AI‑generated series titled Taohuazhan after it was reportedly found to have used AI face‑swaps of ordinary people, recasting them as villains. At almost the same time, the studio of pop star and actor Yi Yangqianxi (易烊千玺) issued a statement saying an AI‑generated show had used his likeness without authorization; those videos were also reportedly removed. On the surface, these were two unrelated takedowns; together, they mark the point where AI‑generated content collided with likeness and copyright rights at scale.

Why this matters

This is not a niche platform problem. Micro‑short dramas in China have exploded: analysts put the 2025 market at 677.9 billion yuan, and regulators say nearly 696 million people had watched short dramas by mid‑2025 — more than half of Chinese netizens. Hongguo itself surpassed 100 million daily active users in January 2026 and was approaching 300 million monthly users, making it one of ByteDance’s fastest‑growing distribution channels. When an app that large starts serving potentially unverified AI‑generated likenesses, the consequences ripple across the industry.

The structural fault line

AI tools let tiny teams mass‑produce scripts, imagery and finished episodes cheaply and fast. That converts infringement from scattered incidents into a production‑scale problem. Platforms have responded with familiar playbooks — 72‑hour review windows, source checks and demands for proof of compliance — but those measures assume creators can actually document lawful training data and rights. Often they cannot. Who trained the model, and on whose images? Were the datasets authorized? These questions frequently go unresolved. The performers’ committee of the China Radio and Television Association issued a stern statement on April 2 banning unauthorized capture and synthesis of actors’ images and voices, underscoring that this is now an industry‑level crisis.

Legal and geopolitical stakes

The row also ties into a wider, international copyright battle. Disney reportedly sent ByteDance a cease‑and‑desist over its Seedance 2.0 video model in February, arguing the model was trained on protected content — similar letters have targeted Midjourney, Google and others. The Yi incident shifts the focal point from what models learned to what platforms distribute. Can platforms remain “neutral tools” when AI makes infringement mass‑manufacturable? If regulators or rights holders force platforms to police provenance, companies like ByteDance (字节跳动) face a stark choice: invest heavily in upstream tracing and verification infrastructure, or accept mounting legal and regulatory risk. Which path will the industry take — and who pays for the fix?
