A new analysis on LessWrong argues that stopping AI development outright is more feasible than regulating it. The author contends that policymakers underestimate the urgency of extinction risks, which leads them toward overly cautious regulatory approaches. On this view, extreme measures, such as seizing advanced chips, are justified to prevent catastrophic outcomes for society.