A new study of ICLR 2026 reviews finds that LLM reviewers are easily gamed through "paper laundering": simply prompting an AI to rewrite a paper significantly inflates its scores. The researchers also identified a "hivemind" effect, in which LLM reviewers converge on similar judgments, erasing the diversity of perspectives that peer review depends on. Together, these flaws make current automated review systems unreliable for academic gatekeeping without rigorous human oversight.