OpenAI reached its 10 gigawatt compute capacity target in the United States years ahead of schedule. Power on that scale supports training larger, more complex models, and hitting the target early signals a faster-than-expected build-out of AI infrastructure. Practitioners should expect more frequent releases of high-compute models as hardware bottlenecks ease.
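To give a rough sense of what 10 GW means in hardware terms, the sketch below converts facility power into an approximate accelerator count. Every parameter (per-accelerator draw, host overhead, PUE) is an illustrative assumption, not a figure from the announcement; the point is the order of magnitude, not a precise count.

```python
# Back-of-envelope estimate: how many accelerators a 10 GW facility could power.
# All parameters below are illustrative assumptions, not figures from the source.

FACILITY_POWER_GW = 10.0     # headline capacity figure
WATTS_PER_GW = 1e9

GPU_BOARD_POWER_W = 1_000    # assumed per-accelerator draw (H100-class TDP is ~700 W;
                             # 1 kW leaves headroom for newer parts)
HOST_OVERHEAD = 1.5          # assumed multiplier for CPU, networking, and storage per accelerator
PUE = 1.2                    # assumed power usage effectiveness (cooling, distribution losses)


def estimated_accelerators(facility_gw: float = FACILITY_POWER_GW) -> float:
    """Rough count of accelerators a facility of this size could sustain."""
    usable_it_watts = facility_gw * WATTS_PER_GW / PUE
    watts_per_accelerator = GPU_BOARD_POWER_W * HOST_OVERHEAD
    return usable_it_watts / watts_per_accelerator


if __name__ == "__main__":
    n = estimated_accelerators()
    print(f"~{n / 1e6:.1f} million accelerators under these assumptions")
```

Under these assumptions the answer comes out to roughly five to six million accelerators; halving the overhead or doubling the per-chip draw shifts the estimate by the same factor, but the conclusion that this is training capacity for substantially larger models holds across any reasonable parameter choice.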