Nathan Lambert argues that the term "distillation attacks" mischaracterizes current model training practice: the phrase suggests a malicious breach when what it describes is standard knowledge transfer. That framing obscures the actual technical process of using larger models to improve smaller ones. Practitioners should prioritize precise terminology to avoid unnecessary alarm.
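For readers unfamiliar with what that "standard knowledge transfer" looks like in practice, below is a minimal sketch of classic knowledge distillation in the Hinton et al. (2015) style, where a smaller student model is trained against a larger teacher's softened output distribution. This is an illustration of the general technique, not a description of any particular lab's pipeline; the temperature `T` and mixing weight `alpha` are illustrative defaults, not values from the source.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard knowledge-distillation objective: blend cross-entropy on the
    hard labels with KL divergence between temperature-softened teacher and
    student distributions. T and alpha are illustrative hyperparameters."""
    # Soft targets: teacher probabilities at temperature T.
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    # Student log-probabilities at the same temperature.
    soft_student = F.log_softmax(student_logits / T, dim=-1)
    # KL term scaled by T^2 so its gradient magnitude stays comparable
    # to the hard-label cross-entropy term.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```

The key point for the terminology debate is visible in the code itself: the student only consumes the teacher's output distribution, which is the same kind of signal any downstream user of a model's outputs receives.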