凤凰科技 2026-04-16

Anthropic tightens access and turns Claude Code into an automated dev workstation

Rapid updates, new releases — and a verification flap

Anthropic has been shipping updates at a breakneck pace. Reports say the company will roll out Opus 4.7 this week, and that a recently leaked tool resembles Lovable, an AI design assistant. At the same time, users say Anthropic is introducing identity verification for "certain usage scenarios" in Claude, a move that many on Chinese social media have reportedly read as an attempt to impose real-name checks on Chinese users. Anthropic's stated rationale is straightforward: powerful technology must be used responsibly, and the platform needs to know "who is using it."

Claude Code becomes a background worker and full-featured IDE

Beyond gating, Anthropic quietly rebuilt its desktop Claude Code into a more complete development workspace. The redesign adds a left-hand sidebar, side-by-side Claude sessions, integrated terminal, file editor, HTML/PDF preview, a faster diff viewer and drag‑and‑drop layouts so multiple agents and conversations can be monitored in a single window. The headline feature is Routines: cloud‑hosted, persistent automation packages that bundle prompts, repos, connectors and runtime environments, and which can run on Anthropic’s infrastructure even with your laptop closed.

Automation built for engineering workflows

Routines are explicitly aimed at software workflows. They can be triggered by GitHub events (PRs, pushes, issues, workflow runs), by API calls, or on schedules; a single Routine can combine multiple triggers. Typical use cases include nightly log aggregation, auto-labeling and summarizing new issues, code-review automation, deployment validation, and incident-driven pulls of diagnostic data with repair suggestions. Access is tiered and rate-limited: Pro users can run up to five Routines per day, Max users 15, and Team and Enterprise users 25.

What this means for China and the wider tech landscape

Why does this matter outside Anthropic’s user base? For Chinese developers and enterprises, the verification step has already raised questions about access and data governance in a market where real‑name rules and cross‑border compliance matter. More broadly, Western AI firms are navigating a fraught geopolitical environment — export controls, sanctions and national security scrutiny — while trying to scale safety controls without blocking whole countries. Anthropic’s moves show the tradeoffs: faster product innovation and stronger automation on one hand, and tighter, potentially region‑sensitive access controls on the other.
