The term "distillation attacks" describes using a larger model's outputs to train a smaller one. Nathan Lambert argues that this terminology is misleading: it frames a standard training technique as a malicious breach. Researchers should distinguish legitimate knowledge transfer from genuine security vulnerabilities to avoid stoking unnecessary alarm in the AI research community.
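For readers unfamiliar with the mechanics being described, here is a minimal sketch of a standard knowledge-distillation training step (in the style of Hinton et al., 2015), assuming a PyTorch setup with hypothetical toy `teacher` and `student` models; it illustrates the general technique, not any particular lab's pipeline:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-label loss: KL divergence between temperature-scaled
    teacher and student output distributions."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable to a hard-label loss.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

# Hypothetical tiny models standing in for a large teacher and small student.
teacher = nn.Linear(32, 10)
student = nn.Linear(32, 10)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(8, 32)  # a batch of inputs
with torch.no_grad():
    teacher_logits = teacher(x)  # the teacher's outputs become the targets

student_logits = student(x)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
optimizer.step()
```

The point the sketch makes concrete is that the only thing the student ever consumes is the teacher's outputs; nothing here breaches the teacher's weights or infrastructure, which is why the "attack" framing is contested.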