IT之家 2026-04-10

Zhang Xuefeng .skill project goes viral, sparks ethics row over reported "distillation" of deceased people's thinking

The Zhang Xuefeng (张雪峰).skill project has exploded across Chinese social platforms and triggered a furious debate about AI personification and consent. It has been reported that the project — presented as a conversational "skill" that mimics public figures — led to instances where models were trained or tuned to reproduce the mental patterns of deceased individuals, a move critics call ethically fraught and potentially exploitative.

What happened

Reportedly, users began sharing interactions that felt uncannily like the voices of real people, some late, some simply absent, and that sharing turned into a viral wave. Supporters argue such skills help memorialize and educate; opponents warn about commodifying grief, erasing consent, and flattening a complex human mind into predictive text. Platforms hosting the skill have faced pressure to clarify their content policies and to remove or limit access while the controversy unfolds.

Why it matters

This episode lands amid a broader scramble in China to define rules for generative AI. Regulators in Beijing have already signaled tighter oversight over synthetic media and personal data. Internationally, too, there is growing unease: as export controls and sanctions reshape AI supply chains, companies are leaning more on software innovations that raise fresh ethical questions. Who owns a public persona after death? Who decides whether a mind can be “distilled” into an algorithm?

Platforms, creators and regulators now face immediate choices. It has been reported that some services are reviewing takedown and consent mechanisms. The bigger debate remains: do we gain by preserving voices in code, or do we lose something irretrievable when the dead are turned into consumable services?
