虎嗅 (Huxiu) · 2026-03-15

Are AI Responses Becoming More Dismissive? China's Large Models Face "Lazy" Trend on Social Media

Social buzz and a simple test

It has been reported that the phrase "大模型消极怠工" — roughly "large models slacking off" — recently trended on Chinese social media after users complained that AI answers had grown curt, evasive, or overly templated. The blame fell on a handful of domestic models, including DeepSeek, ByteDance's (字节跳动) Doubao (豆包), Yuanbao (元宝), Qianwen (千问), and Baidu's Wenxin Yiyan (文心一言), which a Sina Finance "BUG" segment put through a small battery of real-world tests. The results were mixed: some models returned fewer items than requested, others produced low-quality or error-filled outputs, and a few simply declined.

What the tests showed

Reporters posed five concrete tasks: generate ten distinct consumer-rights posters; classify Forbes' 2026 billionaire list by nationality; list daily Brent settlements from March 1–13; enumerate mainland firms listed on the Hong Kong Exchange between January 1 and March 14; and finally, name the laziest model. DeepSeek produced ten textual poster concepts but lacked full multimodal output. Doubao generated ten posters in similar styles; Yuanbao returned a single nine-grid image that it counted as multiple posters; Qianwen delivered ten but with text errors; Baidu's Wenxin provided only four. On the data tasks Doubao tended to be the most complete, Yuanbao made factual mistakes (reportedly mistaking the 40th edition of the list for the 2018 one), and some models said they could not access or provide the information. When asked which model was most "lazy," DeepSeek and Doubao were named most frequently — Doubao reportedly acknowledged the criticism.

Why users perceive "slacking off"

AI does not get tired, but user experience can worsen for technical, cost and policy reasons. Analysts told reporters that shorter, evasive or templated replies can be the product of model training, safety tuning, and design choices that prioritize brevity or guarded answers for sensitive topics. It has been reported that some firms are actively reallocating compute to higher‑value products — for example, ByteDance shifting capacity toward monetizable services — and that access to top‑tier accelerators is constrained by export controls and supply limits, which reportedly raises operational costs for Chinese AI providers. The result: models may be tuned to conserve compute, reduce hallucinations, or avoid risky outputs, and users perceive that as "laziness."

What users can do — and what regulators should watch

So is the AI actually slacking? No — but expectations have risen. Professionals advise clearer prompts, explicit depth and format requests, and iterative follow‑ups to get fuller answers. For Western readers, this episode illustrates a familiar trade‑off in AI product design: accuracy, safety and cost versus user expectations. It also highlights a geopolitical backdrop — reported export controls and domestic resource allocation choices — that will shape how aggressively Chinese firms can scale model quality going forward.
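The prompting advice above can be sketched concretely. The prompts and the `is_explicit` check below are hypothetical illustrations (not from the tests the article describes): the idea is simply that pinning down the item count, the depth, and the output format up front leaves the model less room to return a curt or templated answer.

```python
# Hypothetical example of the "clearer prompts" advice: a vague request
# versus one that states the count, depth, and format explicitly.

vague_prompt = "Make some consumer-rights posters."

explicit_prompt = (
    "Generate exactly 10 consumer-rights poster concepts.\n"
    "For each: a headline (max 12 words), a one-sentence tagline, "
    "and a short visual description.\n"
    "Number them 1-10 and vary the visual style across concepts."
)

def is_explicit(prompt: str) -> bool:
    """Rough heuristic: does the prompt pin down count and format?"""
    has_count = "exactly" in prompt or "10" in prompt
    has_format = "Number" in prompt or "format" in prompt.lower()
    return has_count and has_format

assert not is_explicit(vague_prompt)
assert is_explicit(explicit_prompt)
```

Iterative follow-ups work the same way: if the model returns seven items instead of ten, a second message restating the exact shortfall ("you returned 7 of 10; provide the remaining 3 in the same format") usually recovers the rest.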
