The term "distillation attacks" mischaracterizes current practice in model training. Nathan Lambert argues that training a smaller model on a larger model's outputs is standard knowledge distillation, not a malicious exploit. Conflating the two obscures the actual technical challenge, which is data quality, and researchers must therefore distinguish legitimate knowledge transfer from adversarial data theft.
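To make concrete what the standard practice looks like, here is a minimal sketch of vanilla knowledge distillation: a "teacher" produces temperature-softened output distributions, and a smaller "student" is trained to match them. Everything here is a toy illustration, not any particular lab's pipeline; the linear models, temperature value, and training loop are all assumptions chosen to keep the example self-contained.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 10))          # toy inputs

# Hypothetical "teacher": a fixed linear classifier over 3 classes.
W_teacher = rng.normal(size=(10, 3))
T = 2.0                                 # distillation temperature (assumed)
soft_targets = softmax(X @ W_teacher / T)

# "Student": same shape here for simplicity; trained only on the
# teacher's soft outputs, never on ground-truth labels.
W_student = np.zeros((10, 3))
lr = 0.5
for _ in range(200):
    probs = softmax(X @ W_student / T)
    # Gradient of cross-entropy(soft_targets, probs) w.r.t. W_student.
    grad = X.T @ ((probs - soft_targets) / T) / len(X)
    W_student -= lr * grad

# After training, the student's predictions track the teacher's.
student_pred = softmax(X @ W_student / T).argmax(axis=1)
teacher_pred = soft_targets.argmax(axis=1)
agreement = (student_pred == teacher_pred).mean()
```

The student sees only the teacher's output distributions, which is exactly why the same mechanics can describe both routine knowledge transfer and what some call a "distillation attack": the difference lies in who owns the teacher, not in the algorithm.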