IT之家 (IT Home) · 2026-04-08

DeepSeek launches “Expert Mode” to handle harder questions

What changed

DeepSeek has added a new "Expert Mode" to its interface alongside the existing "Quick Mode," the first time the service has offered a tiered interaction model. Quick Mode is aimed at everyday, instant responses and continues to support OCR for images and files. Expert Mode is designed for complex problems, offering deeper reasoning and an "intelligent search" capability; it currently does not support file uploads or multimodal inputs, and users are warned they may face wait times during peak demand.

Early impressions and unverified details

In hands-on reports, token throughput in the new Expert Mode feels unusually fast. Speculation that the mode is running the rumored "DeepSeek V4" model remains unverified. Screenshots circulating online reportedly show an additional "Visual Mode," although IT Home did not encounter that option in its own checks.

Why this matters (for Western readers)

DeepSeek is part of China's rapidly evolving AI assistant scene, alongside better-known names such as Baidu (百度). Mode layering, which separates fast, lightweight conversations from slower, deeper-reasoning sessions, is one way Chinese AI products are trying to balance user expectations of speed against depth. The rollout comes as China's AI industry continues to advance under constraints such as US export controls on high-end chips, which shape how companies prioritize software and product design.

Outlook

Will Expert Mode persist as a distinct offering, or be folded into a single flexible agent? That remains to be seen. For now, the change signals product-level maturation: DeepSeek is moving from a single chat interface toward explicit user choices about latency, modality, and depth. Users and observers will be watching whether the promised multimodal and visual features follow.
