The term "distillation attacks" mischaracterizes a standard model training practice. Nathan Lambert argues that training smaller models on synthetic data generated by larger ones is routine, not a malicious exploit. The semantic confusion obscures the real technical challenge, which is data quality. Researchers must distinguish deliberate model distillation from adversarial model extraction, in which an attacker queries a proprietary model's API to replicate its capabilities rather than to steal its weights outright.