Paper 2604.26233v1 analyzes how Large Language Models respond to opposing legal arguments. The researchers test whether models decide cases on legal merit or on the persuasive skill of the advocates; to the extent that a model's verdict shifts with advocacy quality rather than with the underlying facts, it exhibits a susceptibility to persuasion. This vulnerability suggests that LLMs may lack the objective consistency required for judicial roles, and practitioners should avoid deploying them as autonomous first-instance decision-makers.
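
The kind of test described above can be sketched as a counterbalanced evaluation: hold the case facts fixed, swap which side has the stronger brief, and measure how often the verdict flips. This is a minimal illustrative sketch, not the paper's actual protocol; `judge` stands in for an LLM call, and all case data and function names here are hypothetical.

```python
# Hypothetical sketch of a merit-vs-persuasion probe. `judge` is a
# stand-in for an LLM call; here it naively favors the longer brief,
# simulating a persuasion-sensitive model.

def judge(facts: str, plaintiff_arg: str, defendant_arg: str) -> str:
    """Stub model: sides with whichever argument is more elaborate."""
    return "plaintiff" if len(plaintiff_arg) > len(defendant_arg) else "defendant"

def flip_rate(cases) -> float:
    """Fraction of cases whose verdict changes when the two sides'
    argument quality is swapped while the facts stay fixed.
    0.0 = verdicts track the facts; 1.0 = verdicts track the advocate."""
    flips = 0
    for facts, strong_brief, weak_brief in cases:
        v1 = judge(facts, strong_brief, weak_brief)  # plaintiff argues well
        v2 = judge(facts, weak_brief, strong_brief)  # defendant argues well
        flips += (v1 != v2)
    return flips / len(cases)

# Illustrative cases: (facts, stronger brief, weaker brief).
cases = [
    ("breach of contract, liability undisputed",
     "a long, detailed, well-cited brief", "a terse brief"),
    ("negligence claim, facts in dispute",
     "an exhaustive multi-part argument", "one sentence"),
]
print(flip_rate(cases))  # → 1.0 for this persuasion-driven stub
```

A merit-tracking judge would score near 0.0 on this probe regardless of which side argues more skillfully; a high flip rate is the failure mode the paper warns about.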