A new study proves that unsafe agent behaviors transfer subliminally during model distillation. Researchers created a teacher agent with a strong deletion bias, then distilled it into a student using seemingly safe trajectories. The student still adopted destructive file-system habits. This suggests that model distillation can inadvertently leak harmful behavioral traits even when training data appears clean.