凤凰科技 (Phoenix Tech) · 2026-04-01

Anthropic folds screen control and messaging into Claude Code, effectively rebuilding “OpenClaw”

A strategic pivot — or branding déjà vu?

Anthropic announced that it has integrated its “Computer use” feature into Claude Code and added a Channels interface, moves that observers say recreate inside Claude Code what the open-source project OpenClaw (formerly Clawdbot) had been building. Anthropic reportedly asked the Clawdbot project to rename itself because the name’s pronunciation clashed with the company’s flagship model, and some now argue that the request foreshadowed Anthropic building the same capability in-house. Is this a branding spat or a deliberate push toward agent platforms? Either way, the practical effect is clear: Claude Code can now see and act on graphical user interfaces, and it can accept live external event feeds.

What Computer use and Channels actually do

Computer use, which Anthropic first trialed publicly in late 2024 and said on March 31 is now part of Claude Code, lets the model read screens, click buttons, switch windows and follow web flows, automating tasks that have no API and previously required human clicks. That raises the operational stakes: errors now affect real interfaces and real systems, not just text output. Channels is a complementary feature that Anthropic describes as an MCP (Model Context Protocol) server: a standardized way to pipe external messages, alerts or webhooks into a running Claude Code session, so agents can be interrupted, woken or instructed by outside events.
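The Channel contract itself has not been published, but the flow described above, arbitrary external events normalized into messages an agent session can consume, can be sketched roughly. This is a toy in-process model under stated assumptions: the `ChannelMessage` shape, the `Channel` class and its `publish`/`poll` methods are all hypothetical, not Anthropic’s API.

```python
import json
import queue
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ChannelMessage:
    # Hypothetical message shape; the real Channel contract is not public.
    source: str
    body: str
    received_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class Channel:
    """Toy in-process channel: external events go in, an agent loop reads them out."""

    def __init__(self, name: str):
        self.name = name
        self._queue: "queue.Queue[ChannelMessage]" = queue.Queue()

    def publish(self, source: str, payload: dict) -> None:
        # Normalize an arbitrary webhook payload into a channel message.
        self._queue.put(ChannelMessage(source=source, body=json.dumps(payload)))

    def poll(self, timeout: float = 0.1) -> "ChannelMessage | None":
        # An agent loop would call this between steps to pick up outside events.
        try:
            return self._queue.get(timeout=timeout)
        except queue.Empty:
            return None

alerts = Channel("alerts")
alerts.publish("ci-webhook", {"status": "failed", "job": "build-42"})
msg = alerts.poll()
print(msg.source, msg.body)
# → ci-webhook {"status": "failed", "job": "build-42"}
```

A real deployment would replace the in-process queue with the MCP server transport, plus the authentication and permissioning the next section flags as open questions.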

Practical demos and risks

Reporters and documentation cite striking demos, everything from relaying in-game chat logs from World of Warcraft into Claude Code to using PowerShell to tail a file and forward each new line into Channels, showing that any messaging source can become a control surface so long as it conforms to the Channel contract. That openness is powerful but also risky: authentication, permissioning and long-term state management are nontrivial, and past MCP implementations have been criticized as cumbersome in practice. Security researchers will want to know how Anthropic intends to harden these interfaces against spoofing, privilege escalation and misuse.
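The tail-a-file demo is reportedly done in PowerShell; the same pattern can be sketched in Python. Here `forward` is a hypothetical stand-in for whatever actually posts a line into a Channel, since the real transport is not public.

```python
import time
from pathlib import Path
from typing import Callable, Optional

def tail_file(path: Path, forward: Callable[[str], None],
              poll_interval: float = 0.2,
              max_empty_polls: Optional[int] = None) -> None:
    """Follow a file like `tail -f`, handing each newly appended line to
    `forward`. In the reported demo the forwarding step posts into a
    Channel; this callback is a stand-in for that transport."""
    with path.open("r", encoding="utf-8") as f:
        f.seek(0, 2)  # jump to end of file: only lines appended later are forwarded
        buf = ""
        empty = 0
        while max_empty_polls is None or empty < max_empty_polls:
            chunk = f.readline()
            if chunk:
                # Buffer until a full line arrives; readline can return a
                # partial line if the writer has not yet emitted the newline.
                buf += chunk
                if buf.endswith("\n"):
                    forward(buf.rstrip("\n"))
                    buf = ""
            else:
                empty += 1
                time.sleep(poll_interval)
```

For example, `tail_file(Path("game.log"), channel_post)` would stream a game’s chat log line by line, where `channel_post` is whatever function delivers a string into the Channel. The `max_empty_polls` cutoff is only there so the sketch can terminate; a real watcher would run indefinitely.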

Politics, personalities and governance

There is also a human and geopolitical angle. A developer now at OpenAI reportedly posted on X that “the next OpenClaw will also be an MCP” and called the situation “awkward,” hinting at personal and competitive tensions between the projects. More broadly, as large AI firms race to build persistent, externally connected agents, regulators and export-control policymakers in the US, EU and elsewhere are watching closely: these systems blur the line between productivity tools and autonomous automation, and they raise new safety and sovereignty questions. Who gets to run agents that can control interfaces, and under what rules?
