Google’s Gemma 4 tipped as a symbolic open‑source play as Chinese groups dominate the field
What’s being reported
Chinese tech outlet ifeng (凤凰网) reports that Google is preparing to release Gemma 4, a new entry in its open‑source model family aimed at countering the growing dominance of Chinese open‑source AI. According to the report, Google DeepMind co‑founder and CEO Demis Hassabis teased the update with a four‑diamond emoji, a clear nod to the model's name (Gemma, Latin for "gem"). Today's leading open models come largely from Chinese companies such as Baidu (百度) and other domestic projects, while U.S. tech giants have mostly shifted toward closed, commercialized offerings.
Rumors and technical claims
Reportedly, Gemma 4 will expand beyond the lightweight Gemma 3 (whose largest 27B‑parameter variant runs on a single GPU and supports multimodal input) with a 120B‑parameter model built on a mixture‑of‑experts (MoE) architecture, activating roughly 15B parameters per token at inference to keep resource needs low and enable local or offline use. These claims are unverified; the same reports suggest a roughly doubled context window and stronger reasoning, but concrete benchmarks and release details remain unannounced.
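To see why an MoE model can claim 120B total parameters but only ~15B active ones, here is a minimal sketch of top‑k expert routing. Nothing here reflects Gemma 4's actual design: the expert count, parameter split, and k value are all made‑up numbers chosen only to roughly match the rumored figures.

```python
import numpy as np

def top_k_gate(logits, k):
    """Pick the top-k experts per token and renormalize their gate weights."""
    idx = np.argsort(logits)[::-1][:k]          # indices of the k largest logits
    weights = np.exp(logits[idx] - logits[idx].max())  # stable softmax over top-k
    return idx, weights / weights.sum()

# Hypothetical split (not Gemma's real architecture): shared layers that every
# token uses, plus a pool of experts of which only k are routed per token.
n_experts = 16
expert_params = 7.4e9   # parameters per expert (made-up)
shared_params = 1.6e9   # attention/embedding parameters (made-up)
k = 2                   # experts activated per token (made-up)

total = shared_params + n_experts * expert_params   # weights stored on disk
active = shared_params + k * expert_params          # weights touched per token
print(f"total ~ {total/1e9:.0f}B, active per token ~ {active/1e9:.0f}B")
# → total ~ 120B, active per token ~ 16B
```

The point of the arithmetic: memory footprint scales with `total`, but per‑token compute scales with `active`, which is why a 120B MoE can be far cheaper to run locally than a dense model of the same size.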
Why it matters
Why would Google offer an open model at all? For U.S. firms the move is partly symbolic: a way to keep Chinese projects from owning the open‑source narrative while protecting revenue from closed, higher‑value platforms. Geopolitics matters too: export controls, chip‑supply frictions and trade policy shape who can realistically train and deploy the largest models. Google is unlikely to let an open variant undercut its paid, closed models, so Gemma 4's offline flexibility may come with deliberate capability limits. Will that be enough to shift the competitive balance in open AI? For now, the answer is uncertain.
