The term 'distillation attacks' refers to training a smaller model on the outputs of a larger one. Nathan Lambert argues that this phrasing mischaracterizes the technical reality of model distillation, and the debate turns on whether the practice constitutes a security breach at all. Researchers must clarify these definitions to keep policy discussions from being misled.
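To make the underlying technique concrete, here is a minimal numerical sketch of classic soft-label distillation, where a student is trained to match a teacher's output distribution. The function names and the temperature value are illustrative, not drawn from any specific system; the point is only that distillation operates on a model's outputs, never its weights.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T softens the distribution.
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the teacher's softened output
    # distribution and the student's: the "distillation" signal
    # a student minimizes to imitate the teacher.
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))

# The loss is smallest when the student already matches the teacher,
# so gradient steps on it pull the student toward the teacher's outputs.
teacher = [2.0, 0.1, -2.0]
mismatched = distillation_loss([1.0, 0.5, -1.0], teacher)
matched = distillation_loss(teacher, teacher)
```

Training on such a loss uses only queryable outputs, which is why the dispute is over terminology and access policy rather than any compromise of the larger model's internals.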