Huxiu (虎嗅), 2026-04-04

Li Ning (李宁): With AI Doing So Much Work, How Should We Reassess Human Value?

AI forces a rethink of measurement

According to a Huxiu summary of his livestream series "AI龙虾十日谈", Li Ning (李宁), a researcher who straddles academia and practice, argues that AI is no longer just a tool but a structural shock to how companies measure and value work. He uses AI to rewire his own teaching and research while also studying organizational management, so his perspective sits where policy and practice meet. The key question: how do you assess human contribution when machines can execute end-to-end tasks in minutes?

From "man-days" to system designers

Li Ning reportedly offered a striking contrast: employee A works 72 hours to finish a project, while employee B spends 40 minutes orchestrating five cloud AI agents in parallel to produce the same output. Traditional attendance and "man-day" metrics fail here. Jobs built around standardized, repetitive steps, the old "screwdriver" roles, are increasingly replaceable by AI, and role boundaries blur: people are now expected to design systems, validate AI outputs, and steer processes rather than simply execute them. Large corporations, however, face structural frictions: bureaucracy, legacy roles and incentive systems slow adoption, while small teams often move faster.
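The "employee B" workflow above can be sketched in code. This is a minimal, hypothetical illustration, not Li Ning's actual setup: `run_ai_worker` is a stand-in for a real cloud model API call, and the task names are invented. The point is the shape of the work: fan sub-tasks out to concurrent AI workers, then spend human effort validating and merging the drafts rather than executing each step.

```python
import asyncio


async def run_ai_worker(task: str) -> str:
    # Placeholder for a network call to a cloud model; here we only
    # simulate latency and return a labeled draft.
    await asyncio.sleep(0.01)
    return f"draft for: {task}"


async def orchestrate(tasks: list[str]) -> list[str]:
    # Launch every sub-task concurrently and wait for all drafts.
    drafts = await asyncio.gather(*(run_ai_worker(t) for t in tasks))
    # The human's job shifts to validation: keep only usable drafts.
    return [d for d in drafts if d.startswith("draft for:")]


if __name__ == "__main__":
    subtasks = ["outline", "data analysis", "slides", "summary", "review"]
    results = asyncio.run(orchestrate(subtasks))
    print(len(results))
```

Wall-clock time here is roughly one worker's latency rather than the sum of all five, which is why "man-days" stops being a meaningful unit for this kind of work.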

Rethinking deep work and individual skills

Li Ning challenges the deep-work orthodoxy: AI execution interrupts long focus sessions, and much historically high-quality output came from iterative execution rather than sudden insight. Paradoxically, the most valuable human contribution may be the initial idea, which often emerges in low-attention states rather than marathon concentration. He reportedly built a prototype research-evaluation tool in a day by fine-tuning a model and running students' proposals through it, illustrating how lowered execution costs shift the bottleneck to ideation. This demands new skills: evaluating AI results, spotting failure modes, and translating loose ideas into machine-actionable prompts.

Bigger picture: competition, policy and measurement

This business-level upheaval is unfolding amid a broader US–China tech competition and export-control environment that constrains hardware access but accelerates software-layer innovation in China. The policy backdrop matters: how firms are allowed to deploy and procure AI will shape who profits from this revaluation of human work. So what should companies, managers and regulators do? Update performance metrics, redesign incentives, and train people to be system designers — otherwise we keep using a broken ruler to measure a new kind of work.
