The term "distillation attack" mischaracterizes current trends in model training. Nathan Lambert argues that using larger models to generate synthetic data for smaller ones is standard practice, not a malicious exploit. The framing matters: labeling routine optimization a security threat blurs the line between intentional data theft and legitimate knowledge transfer, a distinction researchers need to preserve.