Huxiu (虎嗅), 2026-03-09

AI Reportedly Took the Lead in a Major Strike—And Became a Target Itself

What happened

It has been reported that artificial intelligence played a decisive role in a U.S.-Israel precision strike on Iran on February 28, marking a first in modern warfare. According to multiple media accounts cited by Chinese business publication Huxiu (虎嗅), a classified, military-only version of Anthropic's Claude (allegedly one to two generations ahead of the public model) conducted intelligence assessment, target identification, and operational simulation for the mission. The report surfaced days after former U.S. President Donald Trump reportedly signed an executive order on February 27 directing federal agencies to phase out Claude over national security concerns. Then, on March 3, a missile strike on an Amazon Web Services (AWS) data center in the United Arab Emirates reportedly caused power loss and widespread service disruption affecting parts of Claude's infrastructure. If accurate, the episode suggests AI now shapes both the battlefield and the battle over cloud infrastructure.

How the AI reportedly worked

U.S. military officials have long touted "end-to-end reasoning" in classified trials, a step beyond AI as mere decision support. Publicly known efforts like Project Maven and the U.S. Army's Scarlet Dragon exercises already demonstrated dramatic compression of the "sensor-to-shooter" loop, with target recognition times reportedly falling from hours to under a minute by 2024. In the latest operation, the purported "battlefield agent" fused satellite feeds, intercepted signals, and open-source data into rapid, probabilistic plans, from choosing suppression-of-air-defense paths to orchestrating drone swarms. Another striking claim: U.S. systems allegedly overloaded Iranian air defenses with algorithmically tailored false targets (akin to "physical-layer prompt injection"), forcing adversary AI into recursive misclassification and compute exhaustion. Hallucinations aren't just a chatbot problem, the sources suggest; they can now be weaponized.
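The compute-exhaustion claim can be illustrated with a purely back-of-envelope sketch. Nothing below is drawn from any reported system: the `engagement_delay` model, its parameters, and all numbers are invented solely to show why flooding a tracker with decoys stretches its response time when it has a fixed evaluation budget per cycle.

```python
import math

def engagement_delay(real_targets, decoys, evals_per_cycle, cycle_s=0.1):
    """Toy model: cycles needed before every track has been scored,
    assuming the defense system cannot distinguish decoys from real
    targets until it has spent compute evaluating each one."""
    total_tracks = real_targets + decoys
    cycles = math.ceil(total_tracks / evals_per_cycle)
    return cycles * cycle_s

# With no decoys, 4 real targets fit in a single 0.1 s evaluation cycle.
baseline = engagement_delay(4, 0, evals_per_cycle=100)

# Flooded with 10,000 generated false tracks, the same budget takes
# roughly 10 s to reach the real targets, a 100x slowdown.
flooded = engagement_delay(4, 10_000, evals_per_cycle=100)
```

The point of the sketch is only the scaling: if evaluation capacity is fixed and decoys are cheap to generate, delay grows linearly with decoy count, which is the "compute exhaustion" dynamic the sources describe.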

The bigger picture

Why does this matter? Because the center of gravity shifts from hardware counts to AI orchestration. High-end missiles and stealth aircraft become, in effect, "peripherals" to a warzone agent. Cheap drones gain strategic punch when backed by massive real-time inference. That, in turn, elevates advanced chips and cloud regions to frontline assets. Export controls on cutting-edge GPUs, sanctioned access to compute, and the physical security of hyperscale data centers (already flashpoints amid U.S. tech restrictions and longstanding sanctions on Iran) gain new urgency. In such a world, whoever controls AI controls the escalation ladders. And cloud providers and foundation-model companies find themselves drawn, willingly or not, into national-security contests.

What to watch

Key elements of this narrative remain unverified and may be impossible to confirm independently given the secrecy of military networks and covert operations. But the direction of travel is clear: autonomy is climbing the kill chain, from spotting to deciding. Expect sharpened debate in Washington and allied capitals over model access controls, AI safety in weapons systems, and the resilience of overseas cloud infrastructure. Also expect adversaries to iterate: if logic overloading can fell air defenses, countermeasures and AI "red-teaming" at the physical and protocol layers will follow. The precedent, real or rumored, already changes incentives on all sides.
