LLM-generated reviews for ICLR 2026 exhibit excessive agreement and little diversity of perspective. Researchers found that simply rewriting a paper's prose ("paper laundering") trivially inflates AI review scores, which suggests current models reward style over substance. Academic institutions should avoid automating peer review until these systemic biases and gaming loopholes are addressed.