The term "distillation attacks" mischaracterizes a routine model-training practice. Nathan Lambert argues that using a larger model's outputs to train a smaller one (knowledge distillation) is standard practice, not a malicious exploit. The misnomer conflates legitimate knowledge transfer with a security breach. Researchers should refine their terminology to avoid unwarranted alarm about model provenance.
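To ground what "distillation" actually refers to, here is a minimal sketch of the standard knowledge-distillation objective: a smaller student model is trained to match the temperature-softened output distribution of a larger teacher. This is an illustrative NumPy implementation, not code from any particular framework; the function names and example logits are assumptions chosen for clarity.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; higher T produces softer distributions.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student distributions.

    Simplified form of the knowledge-distillation objective: the student
    learns from the teacher's soft targets, scaled by T^2 so gradients
    stay comparable across temperatures.
    """
    p = softmax(teacher_logits, T)  # teacher soft targets
    q = softmax(student_logits, T)  # student predictions
    return float(np.sum(p * np.log(p / q))) * T * T  # KL(p || q) * T^2

# A student whose logits already match the teacher incurs zero loss.
teacher = [4.0, 1.0, -2.0]
print(round(distillation_loss(teacher, teacher), 6))  # 0.0
```

The key point is that the "attack" framing describes exactly this benign training signal: soft targets from a stronger model, a technique in routine use since Hinton et al.'s original distillation work.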