ICML 2025 Review Controversies Spark Academic Debate
-
The ICML 2025 acceptance results have recently been announced, marking a historic high of 12,107 valid submissions and 3,260 accepted papers, for an acceptance rate of 26.9%. Despite the impressive volume, numerous serious issues in the review process have emerged, sparking extensive discussion within the academic community.
Highlighted Issues
- Inconsistency between review scores and acceptance outcomes
Haifeng Xu, Professor at the University of Chicago, observed that review scores at ICML 2025 were oddly disconnected from acceptance outcomes. Of his four submissions, the paper with the lowest average score (2.75) was accepted as a poster, while the three papers with higher scores (3.0) were rejected.
- Positive reviews yet inexplicable rejection
A researcher from KAUST reported that his submission received uniformly positive reviews, clearly affirming its theoretical and empirical contributions, yet it was rejected without any negative feedback or explanation.
- Errors in review-score documentation
Zhiqiang Shen, Assistant Professor at MBZUAI, highlighted significant recording errors. One paper, clearly rated with two "4" scores, was mistakenly documented in the meta-review as having "three 3's and one 4". Another paper was rejected on the basis of outdated reviewer comments that ignored the score updates reviewers made during the rebuttal period.
- Unjustified rejection by Area Chair
Mengmi Zhang, Assistant Professor at NTU, experienced a perplexing case where her paper was rejected by the Area Chair despite unanimous approval from all reviewers, with no rationale provided.
- Incomplete review submissions
A doctoral student from York University reported that incomplete reviews were submitted for his paper, yet the Area Chair cited these incomplete reviews as justification for rejection.
- Zero-sum game and unfair review criteria
A reviewer from UT publicly criticized the reviewing criteria, lamenting that reviews had been overly lenient in the past. He also highlighted a troubling trend: submissions that do not employ at least 30 trillion tokens to train 671B MoE models risk rejection regardless of their theoretical strength.
Additionally, several researchers noted reviews that appeared AI-generated or carelessly copy-pasted, resulting in contradictory feedback.
Notable Achievements
Despite these controversies, several research groups reported notable results, including:
- Duke University (Prof. Yiran Chen’s team): 5 papers accepted, including 1 spotlight poster.
- Peking University (Prof. Ming Zhang’s team): 4 papers accepted for the second consecutive year.
- UC Berkeley (Dr. Xuandong Zhao): 3 papers accepted.
Open Discussion
Given these significant reviewing issues—including reviewer negligence, procedural chaos, and immature AI-assisted review systems—how should top-tier academic conferences reform their processes to ensure fairness and enhance review quality?
We invite everyone to share their thoughts, experiences, and constructive suggestions!
-
It seems that reviewers do not have permission to view the ACs' meta-reviews or the PCs' final decisions this year. As a reviewer, I cannot see the results of the submissions I reviewed.