Nathan Lambert predicts that the performance gap between open and closed models will narrow by mid-2026. He argues that synthetic data and distillation will let open-weights models catch up to proprietary leaders, shifting the competitive advantage from raw data access to training efficiency. If that holds, researchers would be better served prioritizing architectural and training efficiency over massive dataset scaling.