A new critique on LessWrong argues that halting AI development is more feasible than implementing complex regulatory frameworks. The author contends that safety-testing mandates are logically flawed if the goal is simply to stop unsafe models: a regime capable of enforcing such tests could enforce a halt directly. On this view, current regulatory debates overlook the simplest path to risk mitigation.