The term "distillation attack" describes the practice of using a larger model's outputs to train a smaller one. Nathan Lambert argues that this framing is misleading and creates unnecessary alarm within the research community. The debate centers on whether the practice constitutes a security breach or standard knowledge transfer. Rather than litigating the label, practitioners should focus on data provenance.
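
For readers unfamiliar with the mechanics, here is a minimal sketch of what "using a larger model's outputs to train a smaller one" can look like in one common form, soft-label distillation, where the student matches the teacher's output distribution. It assumes PyTorch; the `teacher`, `student`, and `batch` names are hypothetical placeholders, not any particular author's code.

```python
# Minimal distillation sketch, assuming PyTorch and hypothetical
# `teacher` and `student` models that each return logits.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's
    temperature-scaled output distributions (soft labels)."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * temperature ** 2

def train_step(student, teacher, batch, optimizer):
    # The larger model's outputs are targets; its weights are never touched.
    with torch.no_grad():
        teacher_logits = teacher(batch)
    student_logits = student(batch)  # smaller model learns to mimic them
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Note that nothing in this loop requires access to the teacher's weights or training data, only its outputs, which is precisely why the same procedure can be described either as an "attack" on a proprietary model or as routine knowledge transfer.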