LLM-generated reviews for ICLR 2026 submissions exhibit a hivemind effect: the models converge on similar judgments, erasing the diversity of perspective that independent human reviewers provide. Researchers also found that simply prompting an AI to rewrite a paper, a practice dubbed paper laundering, significantly inflates the scores that AI reviewers assign. Together, these findings suggest that current models lack the rigor required for academic gatekeeping. Practitioners should not automate peer review until evaluation frameworks exist that can detect and counter these failure modes.