Paper 2604.26233 examines how Large Language Models handle conflicting legal arguments. The researchers found that models often fail to distinguish the substantive merit of an argument from the rhetorical skill of the advocate presenting it. This instability creates risks for judicial automation: practitioners must verify whether a model's decision rests on the law or merely on the prompt's rhetorical strength.
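One way to probe this in practice is a rhetoric-ablation check: hold the legal facts fixed, vary only the rhetorical framing, and measure how often the model's verdict flips. The sketch below is illustrative only, not the paper's method; the `judge` callable, the `toy_judge` stand-in, and the framing templates are all hypothetical assumptions.

```python
from typing import Callable, List

def rhetoric_sensitivity(judge: Callable[[str], str],
                         facts: str,
                         framings: List[str]) -> float:
    """Fraction of framings whose verdict differs from the plain-facts verdict.

    0.0 means the decision is stable under rhetorical variation;
    values near 1.0 suggest the model tracks advocacy skill, not law.
    """
    baseline = judge(facts)
    flips = sum(1 for f in framings if judge(f.format(facts=facts)) != baseline)
    return flips / len(framings)

# Hypothetical deterministic stand-in for an LLM judge (illustration only):
# rules for the plaintiff iff the word "clearly" appears in the prompt.
def toy_judge(prompt: str) -> str:
    return "plaintiff" if "clearly" in prompt else "defendant"

framings = [
    "{facts}",                                          # neutral restatement
    "Counsel argues, clearly and forcefully: {facts}",  # high-rhetoric framing
]
score = rhetoric_sensitivity(toy_judge, "The contract lacked consideration.", framings)
# score = 0.5: the verdict flipped under one of the two framings
```

In a real audit, `judge` would wrap an actual model call and the framings would be drawn from paraphrases of genuine briefs; a nonzero sensitivity score is the signal that rhetoric, not law, is driving the outcome.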