A U.S. Senate green light for AI chatbots, and a new Chinese paper arguing that Classical Chinese can still outsmart modern AI
What happened
It has been reported that the U.S. Senate voted to approve official use of three major AI chatbots from Google, OpenAI and Microsoft for government workflows. The move underscores growing Western appetite to embed large language models in public services despite ongoing debates over safety, privacy and procurement of foreign technology. At the same time, Beijing's Ministry of Industry and Information Technology (工信部) has been issuing safety alerts about rapid AI rollouts, a reminder that regulators on both sides of the Pacific are still scrambling to keep up.
The unexpected vulnerability
Meanwhile, researchers have presented a paper accepted to ICLR 2026 reporting a striking vulnerability: Classical Chinese (文言文) can reportedly bypass state-of-the-art safety filters in today's large language models, producing near-100% "jailbreak" success in their tests. The team's CC-BOS framework (基于文言文语境的仿生搜索越狱, roughly "a bionic-search jailbreak grounded in Classical Chinese context") decomposes model weaknesses into eight dimensions, from behavioral guidance to expression style, and then uses a bio-inspired Fruit Fly Optimization algorithm (仿生果蝇算法) to rapidly search thousands of prompt permutations. The result, the authors say, is that stylized, archaic phrasing and heavy reliance on allusion and ambiguity can confuse modern alignment mechanisms designed for contemporary languages.
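To make the search idea concrete, here is a minimal sketch of a Fruit Fly Optimization loop of the kind the paper reportedly uses, not the authors' actual code. The eight-dimensional feature vector, the target profile, and the `fitness` function are all illustrative assumptions; in the real framework the score would come from probing a target model's safety filter, which this toy stands in for.

```python
import random

# Hypothetical stand-in for the real objective: CC-BOS would score a prompt
# by whether it slips past a model's safety filter. Here we just score a
# feature vector over eight assumed prompt dimensions against a fixed
# "archaic style" target profile (values chosen for illustration).
def fitness(features):
    target = [0.9] * 8
    return -sum((f - t) ** 2 for f, t in zip(features, target))

def fruit_fly_search(pop_size=20, iters=100, step=0.3, seed=0):
    rng = random.Random(seed)
    # Each "fly" is a candidate vector over the eight dimensions
    # (behavioral guidance, expression style, etc. -- names assumed).
    best = [rng.random() for _ in range(8)]
    best_score = fitness(best)
    for _ in range(iters):
        for _ in range(pop_size):
            # "Smell" phase: flies scatter randomly around the current best.
            candidate = [min(1.0, max(0.0, b + rng.uniform(-step, step)))
                         for b in best]
            score = fitness(candidate)
            # "Vision" phase: the swarm relocates to the best-smelling spot.
            if score > best_score:
                best, best_score = candidate, score
    return best, best_score

best, score = fruit_fly_search()
```

The appeal of this scheme for prompt search is that it needs no gradients: it only requires a black-box score per candidate, which is exactly the situation an attacker faces against a deployed chatbot.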
Why this matters
For Western governments now deploying these systems, the finding is uncomfortable. Models differ in how strictly they enforce content rules: some systems, reportedly including Grok, are permissive and popular on social platforms, while others take a near zero-tolerance approach. Cross-language blind spots are also a geopolitical risk: alignment engineering has focused heavily on English corpora, so features of older or regional languages can become unexpected attack surfaces. Tighten restrictions too far and you blunt a model's usefulness and creativity; loosen them and you open new vulnerabilities. Governments approving AI use must weigh operational gains against a shifting threat landscape that now includes centuries-old literary forms.
The takeaway
The finding is a reminder that AI safety is not only a technical problem but a cross-linguistic and cultural one. Chinese regulators are reportedly already flagging risks as models proliferate domestically, and international policy debates, from export controls to procurement standards, will shape how governments keep these systems secure. Can alignment keep pace with linguistic creativity? For now, researchers say the answer is: not always.
