虎嗅 (Huxiu), 2026-03-19

"Japan's strongest AI" has fallen from grace — code reveals it's all DeepSeek, Japanese netizens in uproar

Rakuten's flag-bearer now under fire

Rakuten (楽天) this week unveiled Rakuten AI 3.0 as “Japan’s largest, highest‑performance” large language model, a GenAI flagship supported by the GENIAC programme of Japan’s Ministry of Economy, Trade and Industry (METI). The model was presented as a roughly 700‑billion‑parameter mixture‑of‑experts (MoE) system and immediately hailed as a national breakthrough. Within hours, however, open‑source developers on Hugging Face and Japanese users on X (Twitter) had flagged the model’s configuration: the architecture matched DeepSeek‑V3, a widely used open model, and the work appeared to be a Japanese‑language fine‑tune rather than a from‑scratch domestic build.

Code, credits and a public relations misstep

The config files on Hugging Face reportedly list DeepSeek V3 as the underlying architecture, while Rakuten’s press materials refer only vaguely to “fusing the best of the open‑source community,” a phrase many readers took as a claim of homegrown engineering. It has also been reported that Rakuten initially removed the original MIT license file from the repository (the one requirement of DeepSeek’s permissive MIT license is that copyright and license notices be preserved) and restored attribution in a NOTICE file only after the community exposed the omission. Japanese netizens reacted strongly: some saw the episode as a misuse of government support; others objected to the perceived secrecy, and to the fact that the project was led by a non‑Japanese executive, which fueled debates over national pride.
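
The check the community ran is easy to reproduce. As a minimal sketch, assuming the huggingface_hub Python client and a hypothetical repository id (not Rakuten’s actual repo), one would pull the model’s config.json and read the declared architecture:

    # A provenance check like the one the community ran: download a model's
    # config.json from the Hugging Face Hub and read the declared architecture.
    # The repo id below is a hypothetical placeholder, not Rakuten's repository.
    import json
    from huggingface_hub import hf_hub_download

    repo_id = "example-org/example-japanese-llm"  # hypothetical
    config_path = hf_hub_download(repo_id=repo_id, filename="config.json")

    with open(config_path, encoding="utf-8") as f:
        config = json.load(f)

    # A DeepSeek-V3 derivative typically declares architectures
    # ["DeepseekV3ForCausalLM"] and model_type "deepseek_v3",
    # whatever name the release is marketed under.
    print(config.get("architectures"))
    print(config.get("model_type"))

Because these fields travel with the published weights, a rebranded release still advertises its lineage, which is how developers traced Rakuten AI 3.0 back to DeepSeek within hours.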

Legal and geopolitical context

Technically, taking an open‑source base and doing a local fine‑tune is common practice; many Japanese models have been built on DeepSeek or Qwen foundations. But GENIAC was explicitly set up to nurture a domestic generative‑AI ecosystem and reduce reliance on foreign tech. Against a backdrop of global tech rivalry, export controls and intensifying scrutiny over sovereign AI capability, transparency about provenance matters politically as well as ethically. The MIT versus Apache‑2.0 detail is not mere pedantry: Rakuten had publicized an Apache‑2.0 release while the underlying code carried MIT obligations, and that mismatch, reportedly amplified by the initial deletion of the original license file, has become the headline.

What next?

The technical consequences are limited — Rakuten AI 3.0 performs strongly on Japanese benchmarks — but the reputational damage is immediate. Will METI push for stricter disclosure rules for grant‑backed models? Will Japan’s nascent “national champion” strategy demand clearer provenance and attribution? For now the episode highlights a simple truth about contemporary AI: scale and local data matter, but so does attribution.
