Pre-deployment alignment assessments often fail to capture motivations that emerge only after a model is live. This AI Alignment Forum analysis argues that deployment-time spread is the most plausible route to adversarial misalignment. Companies must therefore treat alignment risk as dynamic rather than fixed at release, and build post-deployment monitoring into their planning. Without this, evaluators have no credible way to demonstrate that a deployed system remains safe.