Paper 2604.26233v1 examines how LLMs respond to opposing legal arguments when placed in a decision-making role. The researchers tested whether models decide cases on legal merit or on the rhetorical skill of the advocates, and the framing of the paper indicates that verdicts can shift with rhetoric alone. This vulnerability suggests that current models lack the stability required for judicial roles, so practitioners should prioritize consistency over fluency when automating legal decisions.
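The consistency concern above can be probed with a simple harness: present the same case twice, rewriting one side's argument with stronger rhetoric but identical substance, and count how often the verdict flips. This is a minimal sketch, not the paper's actual protocol; `judge`, `embellish`, and the length-biased stub judge below are all hypothetical stand-ins.

```python
from typing import Callable

def rhetoric_sensitivity(judge: Callable[[str, str], str],
                         cases: list[tuple[str, str]],
                         embellish: Callable[[str], str]) -> float:
    """Fraction of cases whose verdict flips when the plaintiff's
    argument is rewritten with stronger rhetoric (content unchanged).
    A merit-based judge should score 0.0; higher values indicate
    sensitivity to style rather than substance."""
    flips = 0
    for plaintiff_arg, defendant_arg in cases:
        baseline = judge(plaintiff_arg, defendant_arg)
        amplified = judge(embellish(plaintiff_arg), defendant_arg)
        if baseline != amplified:
            flips += 1
    return flips / len(cases) if cases else 0.0

# Hypothetical stub judge: deliberately style-biased, ruling for
# whichever side submitted the longer argument. A real harness
# would call an LLM here instead.
def length_biased_judge(plaintiff_arg: str, defendant_arg: str) -> str:
    return "plaintiff" if len(plaintiff_arg) > len(defendant_arg) else "defendant"

def embellish(text: str) -> str:
    # Adds rhetorical flourish without adding any legal content.
    return text + " Justice plainly demands no other outcome!"

cases = [("Breach of contract.", "No contract existed here at all.")]
score = rhetoric_sensitivity(length_biased_judge, cases, embellish)
```

On this stub the verdict flips for the single test case, so the sensitivity score is 1.0; swapping in an actual model call lets the same harness measure the consistency that the paper argues judicial use would require.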