Google has released TPU v8, its latest custom accelerator for large-scale model training. Meanwhile, Tesla is building a dedicated research fab to iterate on its own silicon. Both moves aim to reduce reliance on external chip suppliers; as custom silicon matures, practitioners can expect faster training cycles and lower inference costs.