A recent analysis on LessWrong argues that current AI safety strategies rest on an idealized model of how government actually works. The author cites a conflict between the Department of Weights and Anthropic as evidence that regulation is often politically non-viable in practice. The author concludes that practitioners should pivot toward strategies that operate within existing political constraints rather than relying on theoretical frameworks that assume a cooperative, well-functioning state.