Rakuten AI 3.0 built on China’s DeepSeek V3, reigniting Japan’s AI security row
What happened
Rakuten (楽天) this week announced Rakuten AI 3.0 as “Japan’s largest high‑performance AI model,” claiming roughly 700 billion parameters and a Japan‑focused design. But the model’s publicly posted config.json on Hugging Face reportedly lists the architecture as “DeepseekV3ForCausalLM”: a clear sign the system is a fine‑tuned instance of the China‑origin DeepSeek V3 model. Rakuten reportedly uploaded the model weights to its official Hugging Face repository, where the site also auto‑tagged the model card with “deepseek_v3.”
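The check is trivial to reproduce: the provenance signal sits in the `architectures` field of config.json. The snippet below is a minimal sketch using a hypothetical excerpt reconstructed from the figures reported here, not the actual file from Rakuten’s repository.

```python
import json

# Hypothetical config.json excerpt of the kind posted on a Hugging Face
# model card (reconstructed for illustration, not the real file).
# The "architectures" field declares the model class the weights load into.
config_text = """
{
  "architectures": ["DeepseekV3ForCausalLM"],
  "model_type": "deepseek_v3",
  "hidden_size": 7168,
  "num_hidden_layers": 61,
  "n_routed_experts": 256,
  "vocab_size": 129280
}
"""

config = json.loads(config_text)

def base_architecture(cfg: dict) -> str:
    """Return the declared causal-LM architecture class, if any."""
    archs = cfg.get("architectures", [])
    return archs[0] if archs else "unknown"

print(base_architecture(config))  # -> DeepseekV3ForCausalLM
```

Because Hugging Face reads this field to pick the loading class, it cannot easily be renamed away, which is why screenshots of it were treated as conclusive.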
The technical footprint
The config visible on the model page lists hidden_size 7168, 61 layers, 256 routed experts and a vocab size of 129,280, numbers that line up with DeepSeek V3’s roughly 671‑billion‑parameter footprint and help explain Rakuten’s “~700 billion” headline. DeepSeek V3 is open‑source and its license permits commercial reuse and downstream fine‑tuning, so on a legal reading Rakuten has not necessarily broken any rules. But does legal permissibility equal public trust?
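A back‑of‑envelope count shows why those four numbers are so identifying. The sketch below fills in fields not quoted on the model page with DeepSeek V3’s published values (MoE intermediate size 2048, one shared expert per MoE layer, three dense leading layers, dense intermediate size 18,432) and ignores attention and normalization weights, so it is a rough lower bound rather than an exact count.

```python
# Values quoted from the posted config.json
hidden = 7168
layers = 61
experts = 256
vocab = 129280

# Assumed from DeepSeek V3's published config (not shown in the excerpt)
moe_inter = 2048      # per-expert FFN intermediate size
dense_layers = 3      # first 3 layers use a dense FFN, not MoE
dense_inter = 18432   # dense-layer FFN intermediate size
shared_experts = 1    # shared expert added alongside the routed ones

per_expert = 3 * hidden * moe_inter                 # gate/up/down projections
moe_total = (layers - dense_layers) * (experts + shared_experts) * per_expert
dense_total = dense_layers * 3 * hidden * dense_inter
embed_total = 2 * vocab * hidden                    # embeddings + LM head

rough = moe_total + dense_total + embed_total
print(f"~{rough / 1e9:.0f}B")  # -> ~660B, before attention weights
```

Even without the attention stack, the expert and embedding weights alone land around 660 billion parameters, which is why the quoted config is widely read as DeepSeek V3’s shape rather than a coincidence.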
Reaction and policy backdrop
Public reaction was swift and noisy. Screenshots of the config circulated on X (formerly Twitter) and Japanese social channels, prompting ridicule and sharp questions about disclosure. The episode revives a broader regulatory and geopolitical headache: DeepSeek’s 2025 surge, dubbed by some Japanese outlets an “AI black‑ship” moment, had already prompted government warnings and corporate bans in Japan over data‑security and privacy risks. The Japanese digital minister reportedly advised civil servants to avoid or be cautious with DeepSeek, and major firms including Toyota, Mitsubishi Heavy Industries and SoftBank have restricted its use. For many in Tokyo, the concern is not licensing but control: who trains and audits the models, and what happens to sensitive inputs?
Why it matters
Technically, reusing an open model and fine‑tuning it for a local language is common practice. Politically and commercially, though, the incident exposes the tension between open‑source innovation and national‑security anxieties in an era when AI supply chains cross borders. Will firms and regulators adopt stricter provenance rules for model stacks? Or will this remain a PR problem solved by clearer disclosure? For now, Rakuten’s move is legal and explainable, but trust, once questioned, is harder to rebuild.
