arXiv · 2026-03-27

New arXiv preprint argues deterministic inference is the only path to “trustworthy AI”

What the paper claims

A new preprint on arXiv (arXiv:2603.24904) sets out to formalize the foundations of “trustworthy AI” by tying trust directly to run-time determinism. The authors introduce what they call the Determinism Thesis, the claim that platform-deterministic inference is necessary and sufficient for trustworthy AI, and define a quantity they name trust entropy, H_T, to measure the cost of non-determinism. The paper claims an exact relation for verification failure, P(verification failure) = 1 − 2^(−H_T), and a “Determinism-Verification Collapse” theorem that ties verification success to determinism in a mathematically tight way. The results are presented in a theoretical, proof-based format; the document is a preprint and has not undergone peer review.
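To see what the claimed relation implies numerically, here is a minimal sketch of the reported formula. The function name and the interpretation of H_T as “bits of run-time uncertainty” are our framing, not the paper’s; only the expression 1 − 2^(−H_T) comes from the reported claim.

```python
def verification_failure_prob(trust_entropy_bits: float) -> float:
    """P(verification failure) = 1 - 2^(-H_T), per the relation reported above.

    trust_entropy_bits is the paper's trust entropy H_T (non-negative).
    """
    if trust_entropy_bits < 0:
        raise ValueError("trust entropy cannot be negative")
    return 1.0 - 2.0 ** (-trust_entropy_bits)

# A fully deterministic platform (H_T = 0) never fails verification:
print(verification_failure_prob(0.0))  # 0.0
# One bit of run-time uncertainty already drops verification to a coin flip:
print(verification_failure_prob(1.0))  # 0.5
```

Note the asymptotics the formula implies: as H_T grows, failure probability approaches 1, and only H_T = 0 (full determinism) gives zero failure probability, which is presumably what motivates the “necessary and sufficient” framing.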

Why it matters

If the paper’s formalism holds up under scrutiny, it reframes a long‑running debate about AI safety and validation: you cannot reliably verify a model’s behavior unless its inference is platform‑deterministic. That has immediate consequences for how regulators, auditors and vendors think about model certification, reproducibility and third‑party testing. How do you certify an AI system that may change behavior depending on compiler, accelerator, or firmware? The authors argue that trust entropy quantifies that exact exposure.
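If trust entropy quantifies that exposure, it can in principle be estimated from observed reproducibility by inverting the reported relation: H_T = −log2(P(verification success)). The sketch below assumes verification success means a rerun byte-matches a reference run; that operationalization is our assumption, not necessarily the paper’s definition.

```python
import math

def trust_entropy_from_match_rate(p_match: float) -> float:
    """Invert the reported relation: H_T = -log2(P(verification success)).

    p_match is the empirical fraction of reruns whose output byte-matches
    a reference run on the reference platform.
    """
    if not 0.0 < p_match <= 1.0:
        raise ValueError("match rate must be in (0, 1]")
    return -math.log2(p_match)

# If 95% of reruns reproduce the reference output bit-for-bit:
h_t = trust_entropy_from_match_rate(0.95)
print(f"{h_t:.4f} bits")  # prints "0.0740 bits"
```

On this reading, an auditor could report a measured trust-entropy figure per platform pairing, with zero bits attainable only when reruns match bit-for-bit every time.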

Practical and geopolitical context

For readers unfamiliar with global hardware supply chains: the claim is not only academic. Hardware and software heterogeneity, from different GPU/TPU implementations to varied firmware and driver stacks, is already a headache for reproducibility. Could this theoretical result sharpen policy debates over export controls, supply-chain trust and cross-border deployment of critical AI systems? Possibly. The paper’s prescriptions would push vendors toward stricter platform control and standardized stacks, which intersects with ongoing trade and security discussions around advanced semiconductors and trusted compute bases.

The paper is available on arXiv; readers should treat the preprint as a provocative, formal contribution to the conversation and await peer review and independent verification before adopting its policy conclusions.
