Researcher Nathan Lambert challenges the use of the term "distillation attacks" to describe current model training trends. He argues that the phrasing misrepresents the technical process of using larger models to train smaller ones. This semantic debate highlights growing friction between academic rigor and industry hype. Practitioners should distinguish adversarial attacks from standard knowledge distillation, sketched below.
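
For context, here is a minimal sketch of what standard knowledge distillation looks like in practice, assuming a PyTorch setup; the temperature and mixing weight are illustrative defaults, not values drawn from Lambert's commentary. The student is trained to match the teacher's softened output distribution alongside the usual label loss, which is an ordinary training technique rather than an attack.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Soft-target term: KL divergence between the student's and the
    # (temperature-softened) teacher's output distributions.
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # Hard-target term: standard cross-entropy against ground-truth labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    # Blend the two objectives; alpha controls how much the student
    # relies on the teacher's soft targets versus the labels.
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

Nothing in this loss involves probing or exploiting a deployed model; the "teacher" here is simply a larger model whose outputs are available during training, which is why conflating this procedure with an adversarial attack invites confusion.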