A man in a white t-shirt undergoing a 3D facial scan with red laser lines in a studio.
Photo by cottonbro studio on Pexels
Huxiu (虎嗅), 2026-04-08

First wave of ordinary people whose faces were “stolen” by AI short dramas — turned into villains and pushed to the edge

The case

A growing number of ordinary users are reportedly discovering that AI-generated short dramas have repurposed their faces and cast them as villains without consent. Blogger @七海Christ says her face, down to a distinctive mole, was copied into 桃花簪, a production on the Hongguo Short Dramas platform (红果短剧), and turned into an animal-abusing antagonist. Another user, “白菜汉服妆造,” says an original hanfu photo of hers was reproduced almost identically for a lecherous, greedy villain. No notice. No payment. No outlet for protest. Who do these people turn to?

AI actors and the new production pipeline

At the same time, traditional sets are filling with a new kind of performer. Yaoke Media (耀客传媒) has debuted two AI actors and will air its AI drama 秦岭青铜诡事录; Yuxiao Media (聿潇传媒) has reportedly signed half a dozen digital likenesses, including variants of online influencers. Directors describe a selection process that looks nothing like a casting call: feed character requirements into a generator, produce dozens of faces, and repeat until one “feels right.” Director G.M told reporters that they avoid raw AI close-ups and cherry-pick the best of dozens or hundreds of renders. But even that curation can accidentally yield a face that real people recognise, and complain about.

Legal pushback and platform limits

Chinese lawyers and courts are already being pulled into the debate. Shi Yexiang, a partner at Shanghai Kunlan Law Firm (上海坤澜律师事务所), told reporters that recognisability is the core test for AI portrait infringement, and the Beijing Internet Court (北京互联网法院) has set precedents holding platforms and producers liable when generated faces are highly similar to real ones and publicly identifiable. Major studios have begun legal moves: Yang Zi (杨紫)'s team issued a lawyer's letter this March over unauthorised use, and Yaoke says it will host an industry legal workshop. Still, platforms admit detection is hard: dynamically generated faces change frame to frame and often slip past automated review.
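The detection problem platforms describe can be framed as an embedding-similarity check: extract a face embedding from each frame and compare it against a database of protected faces, flagging near-matches. The article does not describe any platform's actual system; the sketch below is purely illustrative, with a hypothetical `flag_similar_faces` helper and hand-made vectors standing in for embeddings from a real face-recognition model. It also shows why frame-varying faces are hard to catch: a single sampled frame may fall below the threshold even when another frame in the same clip matches.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def flag_similar_faces(frame_embeddings, protected, threshold=0.85):
    """Return (frame_index, person_id) pairs whose similarity exceeds the threshold.

    Because generated faces drift frame to frame, every frame is checked
    independently; a clip is flagged if ANY frame matches a protected face.
    """
    hits = []
    for i, emb in enumerate(frame_embeddings):
        for person_id, ref in protected.items():
            if cosine_similarity(emb, ref) >= threshold:
                hits.append((i, person_id))
    return hits

# Toy demo: frame 0 is dissimilar to the protected face, frame 1 nearly identical.
ref = np.array([1.0, 0.0, 0.0])
protected = {"user_a": ref}
frames = [np.array([0.0, 1.0, 0.0]), np.array([0.99, 0.01, 0.0])]
print(flag_similar_faces(frames, protected))  # → [(1, 'user_a')]
```

A reviewer that samples only frame 0 would miss this clip entirely, which is one concrete reason per-frame drift defeats cheap automated review.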

What’s at stake

This is not only a privacy fight; it threatens livelihoods and trust. Background actors and crew worry about replacement and fee compression; writers and directors fear a shift in creative power from performers to prompt engineers. Abroad, the 2025 controversy over the AI persona Tilly Norwood showed how high the stakes can get: US actors' unions reportedly denounced such synthetic stars as being trained on stolen performances. In China, ordinary people ask a basic question: if a selfie I posted can be turned into a defamatory role overnight, who protects me: the law, the platforms, or the industry itself? The answer will determine whether AI becomes a tool of creative augmentation or a conveyor belt of unauthorised exploitation.
