New data from Anthropic suggests AI hits young workers and highly educated women hardest
Anthropic’s real‑use index reframes the AI‑jobs debate
Anthropic, one of the leading U.S. AI labs, has published an "Anthropic Economic Index" that measures how large language models (LLMs) like Claude are actually used in workplace workflows, not just what they could theoretically do. Previous studies typically broke jobs into tasks and asked whether AI could perform them. Anthropic's team went further: they matched real Claude API conversation logs to the U.S. O*NET job-task database, weighted each task by the share of working time it occupies in a role, and distinguished fully automated flows from human-assisted ones. The result: the field of possible automation (theory) is much larger than the field of observed automation (practice).
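The core metric described above, a time-weighted share of an occupation's tasks that actually show up in usage logs, can be sketched in a few lines. Everything below (the `Task` type, the helper name `usage_exposure`, and the toy figures) is a hypothetical illustration under that reading, not Anthropic's actual code or data.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    time_share: float       # fraction of the role's working time; shares sum to 1.0
    observed_in_logs: bool  # was this task actually seen being done with AI in usage logs?

def usage_exposure(tasks: list[Task]) -> float:
    """Time-weighted share of tasks observed in real AI usage logs."""
    return sum(t.time_share for t in tasks if t.observed_in_logs)

# Toy occupation: three tasks, two of which appear in the logs.
programmer = [
    Task("write boilerplate code", 0.50, True),
    Task("debug a failing test", 0.30, True),
    Task("negotiate requirements with stakeholders", 0.20, False),
]

print(f"{usage_exposure(programmer):.0%}")  # → 80%
```

The key design choice mirrors the article's point: a theoretical index would count any task AI *could* do, while this one counts only tasks *observed* in logs, so the same occupation scores lower in practice than in theory.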
Who is exposed — and why this matters
The index finds that programmers face the highest observed exposure (about 74.5% of their tasks covered), followed by customer-service, data-entry and many data-analysis roles. By contrast, roughly 30% of occupations (cooks, motorcycle mechanics, lifeguards, bartenders, dishwashers) show essentially zero AI exposure, because their work is physical or context-bound and leaves little trace in language-model usage logs. In actual use, Claude reportedly covers about 33% of tasks in computer and mathematical occupations, far below the ~94% theoretical estimate from a prior OpenAI study published in Science.
Winners, losers and the gender/education twist
The most exposed occupations are overwhelmingly information‑processing, office and white‑collar jobs — groups that tend to be more highly educated and include a higher share of women and Asian workers. Anthropic’s analysis shows the female share is about 16 percentage points higher in the highest‑exposure quartile than in the lowest. Similarly, 17.4% of workers in high‑exposure jobs hold postgraduate degrees versus 4.5% in low‑exposure ones. What does that mean? For many high‑education roles the effect will be polarizing: some occupations “deskill” as AI takes over expert tasks (letting lower‑credential workers perform them), while others “upskill” as remaining work concentrates on high‑judgment, high‑stakes activities that only a smaller elite can do.
Geopolitics and policy implications
This new, usage‑based perspective has clear policy implications. Automation risk is concentrated in skilled white‑collar work, so retraining and social policy may need to target different groups than earlier automation debates assumed. And in a geopolitical context where the U.S. and China compete over models, chips and software ecosystems, trade measures and export controls on advanced semiconductors could shape how quickly and where these workplace changes play out. Who benefits — employers, elite specialists, or displaced workers — depends on corporate adoption, regulation, and whether reskilling programs reach those most exposed.
