Nathan Lambert forecasts that open-source models will likely match proprietary performance by mid-2026, arguing that Llama and similar releases narrow the gap through iterative distillation. This pressure forces closed-model providers to innovate faster on reasoning. Practitioners should therefore expect top-tier capabilities to migrate from paid APIs to locally runnable open weights within two years.