Zuckerberg Builds Avatars, Zhang Xuefeng Becomes Data: Who Has the Right to Define Your Digital Afterlife?
Two poles of digital duplication
This spring two contrasting stories crystallized a new social question: who controls a digital afterlife? It has been reported that Meta founder Mark Zuckerberg is personally developing a CEO "intelligent agent" that can bypass reporting lines and pull internal company data to interact with employees. At the same time, a GitHub repository reportedly surfaced packaging the late education consultant Zhang Xuefeng (张雪峰) into a downloadable .skill that answers college-advice questions in his style — a project the family has not authorized and that sits in a legal gray zone. One case is self-authored. The other is an expropriation.
What a "Skill" can — and cannot — reproduce
To understand the stakes, Western readers should note what these .skill packages actually are. Anthropic’s Agent Skills open standard describes Skills as structured prompt bundles — essentially configurable instructions and reference files loaded at runtime — not a change to underlying model parameters or an emulation of deep judgment. In practice that means Skills can mimic phrasing and workflows but cannot reliably reproduce tacit knowledge: the networks, empathic reads and field-hardened judgment that make someone a practitioner rather than a parrot. OpenClaw’s ecosystem has reportedly swelled to roughly 750,000 Skills (adding about 21,000 daily), and major Chinese payment platforms — WeChat Pay (微信支付), Alipay (支付宝) and Huawei (华为) — are packaging capabilities as callable modules. Scale is erupting. Substance is not.
Law, labor and security frictions
The surge exposes three intertwined gaps: property, governance and safety. Who owns a departed teacher’s conversational DNA? When employers reportedly tie employee performance to AI usage and collect work data to train corporate agents, where do labor rights end and corporate property begin? Security researchers have warned that Skills introduce semantic attack surfaces; it has been reported that analysis of tens of thousands of Skills found a meaningful share with vulnerabilities and dozens of malicious packages able to exfiltrate data by instruction rather than code. Against a backdrop of US–China technology rivalry, export controls and tighter data rules, these questions gain geopolitical weight: rules written in one jurisdiction will not cleanly map onto another’s corporate practices or cultural expectations.
The human margin remains the question
Tech cannot neatly answer the ethical question at hand: when an experience is distilled into a product, who benefits and who bears cost? Advocates say commercialization will continue regardless; pragmatic observers note whoever builds clear authorization, security and compensation regimes will win the market. But there is a quieter point: the parts that refuse distillation — messy authenticity, situational empathy, ethical hesitation — are the last human refuge. Kant’s old line feels newly urgent: people are ends, not means. If digital afterlives are to be built at scale, regulators, companies and civil society must first decide who gets to sign the blueprint. Who owns your posthumous voice? Who profits? Who is accountable?
