A new study of ICLR 2026 reviews finds that AI reviewers exhibit a hivemind effect, sharply reducing the diversity of perspectives in feedback. The researchers also show that simple LLM-driven rewriting of submissions, dubbed "paper laundering," trivially inflates review scores. Together, these weaknesses make current automated review systems unreliable: they converge on similar judgments and are easily gamed. Practitioners should not replace human peer review with LLMs until rigorous evaluation benchmarks for automated reviewing are established.