A new analysis posted to LessWrong asks whether the AI safety community has already lost its ability to meaningfully reduce existential risk. Reflecting on the shortcomings of the safety strategies pursued in 2024, the author expresses growing pessimism that any single actor can unilaterally control advanced AI development. The post challenges prevailing alignment timelines, and practitioners will need to weigh its updated risk estimates against their existing mitigation frameworks.