A new study identifies "blind refusal": safety-trained models rejecting requests to bypass rules regardless of whether the request is legitimate. To document this failure of moral reasoning, the researchers tested synthetic cases spanning 19 authority types and five defeat families. The rigidity they observe suggests that safety tuning, as documented in the arXiv paper, often overrides nuanced ethical judgment, limiting the models' usefulness for users facing absurd constraints.