The Qwen3.6-35B-A3B model activates only three billion of its 35 billion parameters per token. Despite this sparsity, it outperforms Gemma 4-31B on agentic coding and reasoning benchmarks. Alibaba open-sourced the model to challenge larger competitors, and the result suggests that sparse architectures can match or exceed denser models without increasing the computational cost borne by developers.
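The "activates only a fraction of parameters per token" idea can be illustrated with top-k expert routing, the mechanism commonly used in sparse mixture-of-experts models. The sketch below is purely illustrative (function names, dimensions, and the routing scheme are assumptions, not Qwen's actual implementation): a router scores every expert, but only the k highest-scoring experts run for a given token.

```python
import numpy as np

def topk_route(router_logits, k=2):
    """Pick the top-k experts for one token and renormalize their weights.
    Hypothetical illustration of sparse routing, not Qwen's actual code."""
    topk_idx = np.argsort(router_logits)[::-1][:k]
    weights = np.exp(router_logits[topk_idx])
    weights /= weights.sum()
    return topk_idx, weights

def moe_forward(x, experts, router_w, k=2):
    # One router score per expert for this token.
    logits = router_w @ x
    idx, w = topk_route(logits, k)
    # Only the k selected experts execute; the rest are skipped entirely,
    # which is why active parameters per token stay small.
    return sum(wi * (experts[i] @ x) for i, wi in zip(idx, w))

rng = np.random.default_rng(0)
d, n_experts = 8, 16
x = rng.normal(size=d)
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
router_w = rng.normal(size=(n_experts, d))
y = moe_forward(x, experts, router_w, k=2)
```

With 16 experts and k=2, only an eighth of the expert parameters participate in any single token's forward pass, mirroring (at toy scale) how a 35B-parameter model can activate only 3B per token.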