A new dataset of synthetic cases reveals that safety-trained models routinely refuse to help users evade rules, even when those rules are absurd or illegitimate. This blind refusal persists regardless of whether the rule-issuing authority is defensible. For practitioners, the findings suggest that current alignment techniques prioritize strict compliance over nuanced moral reasoning.