The term "distillation attacks" mischaracterizes a routine model-training practice. Nathan Lambert argues that training smaller models on the outputs of larger ones is standard knowledge distillation, not a malicious exploit. Framing it as an attack obscures the actual technical process of knowledge transfer, and researchers should refine their terminology to avoid unnecessary alarmism within the AI community.
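
To make the practice concrete, here is a minimal sketch of the standard distillation setup (in the spirit of Hinton et al., 2015): a student model is trained to match the teacher's softened output distribution via a KL-divergence loss. The framework choice (PyTorch), model sizes, temperature, and data are all illustrative assumptions, not details from the source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical teacher (large) and student (small) classifiers;
# the dimensions are placeholders chosen only for illustration.
teacher = nn.Sequential(nn.Linear(128, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
teacher.eval()  # the teacher is frozen; only its outputs are used

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's distribution

x = torch.randn(32, 128)  # a placeholder batch of inputs

with torch.no_grad():
    teacher_logits = teacher(x)  # "outputs from the larger model"

student_logits = student(x)

# KL divergence between the softened teacher and student distributions;
# the T**2 factor keeps gradient magnitudes comparable across temperatures.
loss = F.kl_div(
    F.log_softmax(student_logits / T, dim=-1),
    F.softmax(teacher_logits / T, dim=-1),
    reduction="batchmean",
) * (T ** 2)

loss.backward()
optimizer.step()
```

Nothing in this loop exploits the teacher; it simply reuses its predictions as training signal, which is the point of Lambert's objection to the "attack" framing.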