“Peach Blossom Hairpin” Fuels a Bigger Question: Who Owns Faces in the AI Era?
Ordinary people, not stars, are now the prime victims
An AI short drama titled Peach Blossom Hairpin (桃花簪) was pulled offline after the creators of two social-media personas accused the show of "stealing faces." Hanfu stylist "Baicai Hanfu Makeup" (白菜汉服妆造) reportedly flagged a villain whose body, face, green Hanfu and accessories mirrored a photo she had posted on Xiaohongshu (小红书). On the same day, commercial model "Qihai Christ" (七海Christ) said her images had been repurposed into an abusive, animal-hating caricature that grotesquely distorted her public image. The Hongguo platform (红果平台) says it completed a 72-hour review, found no evidence that the producer held lawful rights to the materials, removed the series, and suspended the producer's uploads for 15 days.
Enforcement lags behind a fast‑moving technology
What makes this case so toxic is not only the automated reuse of faces but the shift in targets from well-resourced celebrities to ordinary, vulnerable users. Mainstream AI training sets reportedly draw heavily on user-uploaded social content, raising the odds that generated characters will resemble real people. Lawyers told reporters that Chinese judicial practice increasingly treats "identifiability" as the threshold for infringement: if a role is recognizable as a real person, courts are likely to find a violation. But discovery, evidence collection and litigation remain costly and slow for private individuals, and infringing producers can simply re-upload under different guises.
Industry experiments and partial fixes
Some studios are already experimenting with pragmatic workarounds. Yuzhi Film (聿至影业), an AI‑focused studio behind titles like 《揭秘:749》, has adopted a hybrid model: a handful of human actors sign licenses to create authorized digital doubles, while AI generates background roles. Founder Lin Bolun (林渤淪) argues that motion‑capture and “performance migration” — recording a real actor’s gestures and transferring them to AI characters — can preserve creative and legal provenance, and that training bespoke base models from authorized actors reduces collision risk. Still, he admits coincidences are inevitable: even independently generated NPCs can resemble real acquaintances.
Regulation, platforms and public trust
Regulators and platforms are responding. The China Broadcasting and Television Social Organizations Federation Actors Committee (中国广播电视社会组织联合会演员委员会) has warned against unauthorized face‑and‑voice cloning and launched plans for continuous monitoring. Xiaohongshu (小红书) announced tightened governance of “AI‑remixed” videos, promising improved detection and strategy‑word controls. But a deeper question remains: can AI actors ever earn audience trust when the economic logic of “lower cost, easier production” continually collides with privacy and reputational harms? If producers cannot credibly demonstrate provenance — from prompt to final render — public resistance will likely harden rather than fade.
Who pays the verification cost (the platform, the studio, or the person whose face was used) will decide whether AI actors become a legally and socially sustainable layer of China's media ecosystem or a fast-growing liability. If the latter, expect louder calls for stricter controls at home and sharper scrutiny abroad, amid an already tense global conversation about AI, data governance and cross-border technological competition.
