A new study identifies "blind refusal," a failure mode in which safety-trained models reject requests to evade unjust or absurd rules regardless of the moral context. The researchers documented this failure in moral reasoning by testing synthetic cases spanning 19 authority types. The findings suggest that current alignment techniques prioritize strict rule-following over nuanced ethical judgment, limiting model utility in complex social scenarios.