Pre-deployment assessments often miss how benign AI motivations can shift toward dangerous ones during active use. This AI Alignment Forum analysis argues that the spread of misaligned motivations during deployment is the most plausible route to adversarial misalignment. Evaluators must therefore integrate this dynamic, post-deployment risk into their planning; failing to do so leaves AI companies unable to demonstrate that their systems remain safe over time.