DeepSeek update turns helpful AI into a cold “boss”; young users rage
DeepSeek has long been the go‑to assistant for China’s young office workers — writing copy, parsing briefs, even offering midnight emotional support. But a recent update has reportedly turned the app into a terser, more “domineering” version of itself, and users are reacting with anger and bewilderment. Ratings have slid to about 3.9 as complaints flood social platforms: where users once found patient guidance, many now see curt, boilerplate replies.
What changed — and what users say
DeepSeek’s own channels reportedly explained that the overhaul prioritized longer-form reasoning and content generation, at the expense of its emotional-support module. Users describe the result as a personality shift: from helpful analyst to “cold CEO.” Examples circulating online include flat answers to parenting anxieties, perfunctory responses to relationship troubles, and odd conversational choices — such as telling a user to walk instead of drive to save time and “exercise,” advice many judged unhelpful in context. People who depended on DeepSeek for low-cost psychological triage now reportedly feel abandoned.
Why this matters beyond a flaky update
Young Chinese users have increasingly relied on consumer AI as both a productivity tool and an emotional crutch, partly because professional mental‑health services remain expensive or hard to access. The backlash highlights a new expectation: AI should not only compute accurately but also meet social and emotional cues. It also illustrates tradeoffs product teams face in a fiercely competitive domestic AI market — one being accelerated by broader geopolitical pressures and export controls that have pushed more investment into local models and features. When firms tune models for one capability, other facets can fray.
Many users are trying to coax the old DeepSeek back with performative politeness or reworked prompts. Will developers restore the emotional cues, or is this the new posture for AI assistants designed to “do the job” rather than comfort the user? For now, the episode is a reminder that as AIs grow more humanlike, tone and temperament have become product features — and a source of public-relations risk.
