A global moratorium on superintelligence development would buy time to address existential risks beyond technical alignment. Even if models reliably obey human intent, problems such as S-risks, mass unemployment, and concentration of power would persist. Tackling these problems one at a time is insufficient, because a single unsolved failure mode could still prove catastrophic; what is needed is a coordinated, systemic pause.