TMTPost (钛媒体) · 2026-04-06

AI Becomes the Strongest Battering Ram: Nation‑level iOS Exploit Goes Public

State‑grade tools hit the open market

It has been reported that a six‑bug iOS exploit suite called DarkSword (暗剑) was uploaded in full to GitHub in March 2026, and China’s Ministry of Industry and Information Technology (工信部) issued an emergency warning on April 3. What was once the preserve of a handful of advanced teams — zero‑click chains that move from remote browser code execution to full kernel compromise — is now available as open‑source code. The riskiest element: the kit reportedly bypasses Apple’s pointer authentication (PAC) on A12 and later chips, turning a nation‑grade “nuclear button” into a commodity tool.

The weapon and its victims

DarkSword reportedly targets iOS 18.4–18.7 devices and exposes about 221 million iPhones — roughly 14.2% of the installed base — according to the same reporting. The attack payload dubbed GHOSTBLADE is said to automate rapid theft of crypto assets from services like Coinbase, Binance and MetaMask and then erase traces within minutes. Apple has been urged to issue wide patches; it has been reported that emergency fixes are being pushed even to legacy devices such as the iPhone 6S.
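The two headline figures, 221 million exposed devices and a 14.2% share of the installed base, can be cross-checked with simple arithmetic. A minimal sketch using only the numbers as reported:

```python
# Sanity check of the reported figures: 221 million exposed iPhones
# said to be roughly 14.2% of the iPhone installed base.
exposed = 221_000_000
share = 0.142

# Dividing the exposed count by its reported share implies the total base.
implied_installed_base = exposed / share
print(f"Implied iPhone installed base: {implied_installed_base / 1e9:.2f} billion")
```

The implied total of about 1.56 billion iPhones is in line with commonly cited industry estimates of Apple's active installed base, so the two reported numbers are at least internally consistent.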

AI in the cockpit: lessons from Kuaishou

The leak follows a December 2025 incident on Kuaishou (快手) that security firms have characterized as an AI‑driven stress test of platform defences: about 17,000 bot accounts simultaneously streamed pre‑flagged content to probe detection and takedown delays. Analysts reported that attackers used AI to calibrate content survival windows and to swamp execution interfaces so that detection became moot — the system saw violations but could not act. Qihoo 360 founder Zhou Hongyi (周鸿祎) has warned that large language models lower the bar for attackers; HackerOne reported a 210% rise in AI‑assisted vulnerability reports in 2025.

Implications: an arms race at machine speed

Two takeaways are stark. First, AI is accelerating both the development and distribution of exploit code: what used to take months and elite expertise can now be compressed to weeks and scaled by automated agents. Second, defenders must adopt AI at scale too — automated “security robots” that focus on behavioural patterns and automated circuit breakers, not only smarter content filters. This dynamic plays out amid broader geopolitical tensions — U.S. export controls on advanced chips and AI tools and the global tech supply‑chain debate — which complicate remediation and attribution. If nation‑level cyber weapons become public and AI drives attack automation, the contest shifts from people versus people to intelligent system versus intelligent system. How long before defence models must literally out‑think the attackers?
