Anthropic’s turnaround: from Pentagon snub to Wall Street’s new AI tool
From national security to financial security
Anthropic, one of the better-known U.S. AI startups, has staged a sharp turnaround. The company was reportedly shut out of Pentagon projects after it balked at granting the military unfettered access to its models. Now, according to reports, U.S. Treasury and Federal Reserve officials have pressed Wall Street to take the company seriously, even nudging major banks to test Anthropic's new Mythos model. From being kept at arm's length by defense buyers to being pulled into high-level financial policy conversations: why the sudden pivot?
Banks told to probe the model
According to the same reports, senior regulators suggested that banks use Mythos to probe and remediate their own systems, and to better understand how large language models could threaten financial institutions. Anthropic's safety team has published a case study showing that Mythos can identify multi-vulnerability exploit chains against browsers, a class of attack that has historically breached even hardened targets (Stuxnet, which chained several zero-day exploits, is the classic example). The message to banks was blunt: AI can be both a defensive tool and a vector for new attacks, so testing is urgent.
Rapid enterprise adoption and market implications
Enterprise uptake is accelerating. Reports citing the Financial Times and payments firm Ramp say that adoption of Anthropic's tools among U.S. companies jumped in March, and that overall enterprise adoption has been rising since 2023, reportedly narrowing the gap with OpenAI. App-analytics and market-data firms show Claude's downloads and active-user metrics surging month-on-month, even as ChatGPT's consumer growth slows. The result: regulators and financial firms now treat AI vendors not only as suppliers but also as potential systemic risk points, and Anthropic, for better or worse, is squarely in the frame.
