Has AI "stolen" voices? Top-tier voice actors collectively declare war on AI infringement
Voice actors strike back
Since March, according to reports, a coalition of high-profile Chinese dubbing studios, including Bian Jiang Studio (边江工作室), 729 Sound Workshop (729声工场) and Yinxiong Lianmeng (音熊联萌), together with their rostered talent, has been waging a coordinated campaign against unauthorized AI voice cloning. Leading performers such as Shi Zekun (史泽鲲) have reportedly retained lawyers and opened public inboxes to collect evidence, while other well-known names, including Ji Guanlin (季冠霖) and Lv Yanting (吕艳婷), have publicly accused platforms and creators of cloning and misusing their timbres without consent. The dispute extends beyond commercial mashups: the voice of documentary narrator Li Lihong (李立宏) has allegedly been repurposed again and again across food and short-video content.
Scale and patterns: why does it keep happening?
Why does the problem persist? The short answer: low cost and high availability. Deep-synthesis tools for text-to-speech and voice conversion are widely accessible and often need only a few seconds of sample audio to produce a convincing clone. China's first nationally publicized AI voice-infringement case, reported in 2024, involved a defendant selling a voice model inside an app without permission. After public backlash, many smaller creators and some apps have removed offending content; others keep operating in niches where infringement stays under the radar. Non-commercial fan edits, such as the viral AI covers of singer Stefanie Sun, have been tolerated by some rights holders, but actors stress that using cloned voices in profit-making productions is a different matter.
Legal and technical barriers to enforcement
Legal experts warn that enforcement is hard. Ye Junxi (叶俊希), who helped draft China's guidelines on generative-AI data use, told reporters that three features make pursuit difficult: the low technical barrier and abundance of open-source models; a fragmented, opaque infringement chain in which it is hard to identify who "fed" the original samples; and deliberate audio tweaks or multi-source blending intended to obscure provenance. Courts decide infringement on factors such as perceptual similarity, public recognition and identity markers, and claims can invoke personality rights, copyright, data-protection rules or unfair competition. Platforms' "safe-harbor" defenses further complicate matters when operators fail to act once they know, or should know, about infringement.
What comes next?
Collective legal action and public shaming are the immediate tactics of choice for the dubbing community, but costs remain high and remedies patchy. Will platforms, regulators and creators move faster than the open-source tide? Amid global debates over AI regulation and cross-border technology governance, this campaign in China is a test case: can law, contracts and platform controls catch up to tools that make voice cloning trivial? Voice actors say they will press on. The question now is whether that pressure can force meaningful, enforceable guardrails before the next wave of synthetic voices goes live.
