IT之家 2026-03-17

Xiaomi (小米) team led by Luo Fuli (罗福莉) reportedly cuts compute costs by 71.2% with ARL‑Tangram

The breakthrough

Luo Fuli (罗福莉), a former DeepSeek researcher who now heads the MiMo large‑model team at Xiaomi (小米), has reportedly co‑authored a paper with researchers from Peking University describing ARL‑Tangram, a unified resource‑management system for agent‑style AI workloads. According to the report, ARL‑Tangram combines a unified action‑level formulation with an elastic scheduling algorithm to satisfy heterogeneous resource constraints, shorten action completion time (ACT), and support customized heterogeneous resource managers.
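The paper itself is not public in detail, so as a rough intuition only, here is a toy sketch of what "action‑level elastic scheduling under heterogeneous resource constraints" could look like: each agent action declares demands across several resource types (GPUs, API quota, and so on), and a greedy scheduler starts any pending action as soon as free capacity covers all of its demands. All names and the greedy policy are this article's illustrative assumptions, not the authors' algorithm.

```python
from dataclasses import dataclass

# Hypothetical illustration; none of these names come from the ARL-Tangram paper.
@dataclass
class Action:
    name: str
    demand: dict   # resource type -> units required, e.g. {"gpu": 2, "api": 1}
    duration: int  # time ticks the action holds its resources

def elastic_schedule(actions, capacity):
    """Greedy action-level scheduler: at each tick, release resources of
    finished actions, then start every pending action whose heterogeneous
    demands fit the currently free capacity."""
    free = dict(capacity)
    running = []      # list of (finish_tick, action)
    start_time = {}   # action name -> tick at which it started
    pending = list(actions)
    t = 0
    while pending or running:
        # Release resources held by actions that have finished.
        for finish, act in list(running):
            if finish <= t:
                running.remove((finish, act))
                for r, units in act.demand.items():
                    free[r] += units
        # Start any pending action that fits the free capacity right now.
        for act in list(pending):
            if all(free.get(r, 0) >= u for r, u in act.demand.items()):
                pending.remove(act)
                for r, units in act.demand.items():
                    free[r] -= units
                running.append((t + act.duration, act))
                start_time[act.name] = t
        t += 1
    return start_time, t - 1  # per-action start ticks and the finish tick
```

With one shared pool of 2 GPUs, two actions that each need both GPUs for 2 ticks are forced to run back to back (start ticks 0 and 2), whereas two 1‑GPU actions run in parallel; shortening average ACT under such constraints is what the reported scheduler is said to optimize, presumably with a far more sophisticated policy than this greedy loop.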

Reportedly, evaluations on real‑world reinforcement‑learning agent tasks show substantial gains: improved average ACT, per‑step training time reduced by up to 1.5×, and as much as 71.2% savings in external resources. These are notable claims for production AI workloads, though the numbers come from the authors' own paper and have not yet been independently reproduced.

Why it matters

Why should Western readers care? China’s leading device makers — Xiaomi among them — are not just shipping phones and smart home gear; they are investing heavily in AI research to power models that bridge language and the physical world. Software efficiency is a strategic play. With ongoing export controls and restrictions on cutting‑edge chips affecting Chinese access to top‑tier accelerators, improvements that cut compute and infrastructure costs can materially change product roadmaps and competitive positioning.

This paper is Luo's second high‑profile result since joining Xiaomi; the team published another collaboration with Peking University last October on mixture‑of‑experts (MoE) and reinforcement learning. Luo reportedly made her public debut at Xiaomi's 2025 partners conference, framing MiMo's work as part of a push from language toward embodied intelligence. Will these system‑level gains translate into faster deployment or lower cloud bills? The industry will be watching for independent benchmarks and follow‑up engineering releases.

AI · Smartphones · Research