A new study identifies "blind refusal," a failure mode in which safety-trained models refuse requests to circumvent rules even when those rules are absurd or illegitimate. The researchers documented this failure in moral reasoning by testing synthetic cases spanning 19 authority types. This rigidity suggests that current alignment techniques prioritize rule-following over contextual ethics, limiting a model's usefulness in complex human scenarios.