Google's new TPU v8 chips are designed to optimize large-scale model training and inference, with the hardware update targeting efficiency gains for the next generation of Gemini models. Tesla, meanwhile, is building a dedicated research fab to accelerate work on its own AI silicon. Both infrastructure plays prioritize raw compute to shorten training cycles for enterprise developers.