The elephant in the room: most of AI’s output is not adopted by users and is wasted
Low adoption, high output
It has been reported that despite high uptake of AI tools among developers, most AI-generated output is never adopted by humans, a finding that undercuts claims about AI-driven productivity gains. Several company surveys and industry reports converge on a rough ballpark: adoption rates for AI-generated code tend to sit around 20%. An internal ZoomInfo survey reportedly found an average GitHub Copilot code adoption rate of about 20%; SoftDocs reported a 13–21% acceptance range in the first half of 2025; and DX AI’s Q4 impact report suggested that roughly 22% of merged code was AI-written. Taken together, the public data point to a surprisingly low “adoption rate”: the share of AI output that developers actually keep and ship.
Why the number matters
Adoption rate is a closer proxy for real-world ROI than simple usage metrics. If an AI assistant generates 1,000 lines of code but only 200 are kept, what did the team actually gain? It has been reported that practitioners and vendors alike find this metric hard to measure, and that enthusiasm, instant feedback loops, and “slot-machine” engagement lead users to overvalue interaction volume relative to useful output. An anonymous AI product designer quoted in the reporting warned that applause metrics (likes, upvotes) are poor substitutes for measuring value, and that low adoption masks wasted compute, time, and cognitive effort.
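The metric itself is trivial to state, which makes its absence from most dashboards all the more striking. A minimal sketch, using the illustrative figures from the example above (1,000 lines generated, 200 kept); nothing here reflects any vendor’s actual measurement methodology:

```python
# Sketch of the adoption-rate metric discussed above.
# Figures are the article's illustrative example, not real vendor data.

def adoption_rate(lines_generated: int, lines_kept: int) -> float:
    """Share of AI-generated lines that developers actually keep and ship."""
    if lines_generated == 0:
        return 0.0
    return lines_kept / lines_generated

rate = adoption_rate(lines_generated=1000, lines_kept=200)
print(f"Adoption rate: {rate:.0%}")  # prints "Adoption rate: 20%"
```

The hard part in practice is not the division but the numerator and denominator: deciding what counts as “generated” (suggestions shown? accepted?) and “kept” (surviving review? surviving a month in production?).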
Mixed signals from inside companies
Not all teams see the same ratio. It has been reported that some engineers (one quoted pseudonymously from a major Chinese tech firm) claim near-100% adoption when they use the latest, costly models. Meanwhile, White Whale Open Source (白鲸开源) CEO Guo Wei (郭炜) shared internal data showing that adoption varies sharply with task complexity: near 100% for simple Q&A and low-complexity coding, about 80% for medium-complexity tasks, and 50–60% for highly complex engineering scenarios. Those differences underscore that “adopted” must be defined carefully (by tokens, lines, modules, or functional delivery) and that adoption depends on model quality, engineering processes, and an organization’s willingness to pay for the best models.
Bigger picture: product design, cost and geopolitics
The implications go beyond product metrics. Low adoption can create false productivity narratives, inflate costs, and waste compute, problems that will matter more as enterprises factor AI into budgets and compliance. Geopolitics also looms: access to leading models and chips, shaped by export controls and trade policy, determines which firms can buy the high-quality, stable models that produce adoptable output. If adoption is the real test of AI’s value, the industry needs standardized measurement and product designs that prioritize usefulness over novelty. Will companies start measuring what truly matters, or keep celebrating activity while ignoring the elephant in the room?
