Gemma 4, a 12‑billion‑parameter model, scores 2.3% higher than GPT-4 on the OpenAI evaluation. The improvement stems from a new data‑mixing strategy that reduces hallucinations. Researchers note that open‑source licensing and community tooling drive adoption, and practitioners can now fine‑tune Gemma 4 on niche domains at minimal cost. The release also includes a new tokenizer that cuts inference latency by 15%.
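One common route to low‑cost fine‑tuning is a LoRA‑style low‑rank adapter, which freezes the base weights and trains only two small matrices. This is a general technique, not something the release itself specifies; the sketch below uses hypothetical toy dimensions purely to illustrate the parameter savings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight matrix (toy dimensions for illustration only).
d_out, d_in, r = 8, 8, 2
W = rng.normal(size=(d_out, d_in))

# LoRA adapters: only A and B are trained, shrinking the trainable
# parameter count from d_out * d_in to r * (d_out + d_in).
A = rng.normal(size=(r, d_in)) * 0.01
B = np.zeros((d_out, r))  # zero-init so training starts from the base model


def forward(x, alpha=1.0):
    # Effective weight is W + alpha * (B @ A); W itself is never updated.
    return (W + alpha * (B @ A)) @ x


x = rng.normal(size=d_in)
# Because B starts at zero, the adapted model initially matches the base model.
assert np.allclose(forward(x), W @ x)
```

Only `r * (d_out + d_in)` adapter parameters are optimized, which is why this family of methods keeps fine‑tuning cheap on commodity hardware.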