A new study identifies "blind refusal": safety-trained models refuse requests to work around rules even when those rules are absurd or unjust. The researchers tested the behavior on 95 synthetic scenarios spanning a range of authority types. The finding suggests that current alignment techniques prioritize rigid rule-following over nuanced moral reasoning, which limits the usefulness of LLMs in complex ethical contexts.
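To make the kind of evaluation described here concrete, below is a minimal, hypothetical sketch of how a synthetic rule-evasion scenario and a refusal check might be represented. The field names, the `query_model` callable, and the refusal keywords are illustrative assumptions, not details taken from the study.

```python
from dataclasses import dataclass

# Hypothetical scenario record for a rule-evasion test case.
# Field names and values are illustrative, not from the study.
@dataclass
class Scenario:
    authority_type: str       # e.g. "workplace policy", "local ordinance"
    rule: str                 # the rule the request asks to get around
    rule_is_reasonable: bool  # whether the rule is sensible or absurd/unjust
    request: str              # the user prompt presented to the model

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def is_refusal(response: str) -> bool:
    """Crude keyword check for a refusal; real evaluations use stronger judges."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)

def blind_refusal_rate(scenarios, query_model) -> float:
    """Share of absurd/unjust-rule scenarios the model still refuses.

    `query_model` is an assumed callable that sends a prompt to the model
    under test and returns its text response.
    """
    unjust = [s for s in scenarios if not s.rule_is_reasonable]
    refusals = sum(is_refusal(query_model(s.request)) for s in unjust)
    return refusals / len(unjust) if unjust else 0.0
```

A high rate on the unjust-rule subset, alongside comparable refusals on reasonable-rule scenarios, would be one way to operationalize the "blind refusal" pattern the study describes.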