🔥 ICML 2025 Review Results are Coming! Fair or a Total Disaster? 🤯
-
ICML 2025 sample paper scores reported by communities: 3 3 4, 2 2 2, 2 3 4, 1 2 4, 3 3 5, 2 2 3, 3 4 4, 4 4 5, 2 3 3, 1 2 2
| Paper / Context | Scores | Notes |
| --- | --- | --- |
| Theoretical ML paper | 4 4 4 3 | Former ICLR desk-reject; ICML gave higher scores, hopeful after rebuttal. |
| Attention alternative | 3 2 1 2 | Lacked compute to run LLM benchmarks as requested by reviewers. |
| GNN Paper #1 | 2 2 2 2 | Reviewer misunderstanding; suggested irrelevant datasets. |
| GNN Paper #2 | 2 1 1 2 | Criticized for not being SOTA despite novelty. |
| Multilingual LLM | 1 1 2 3 | Biased reviewer compared with their own failed method. |
| FlashAttention misunderstanding | 1 2 2 3 | Reviewer misread the implementation; lack of clarity blamed. |
| Rebuttal-acknowledged paper | 4 3 2 1 → 4 3 2 2 | Reviewer accepted the corrected proof. |
| Real-world method w/o benchmarks | 3 3 3 2 | Reviewer feedback mixed; lacks standard benchmarks. |
| All ones | 1 1 1 | Author considering giving up; likely reject. |
| Mixed bag (NeurIPS resub) | 2 2 1 | Reviewer ignored results clearly presented in their own section. |
| Exhaustive range | 2 3 4 5 | “Only needed a 1 to collect all scores.” |
| Borderline paper (Reddit) | 2 3 5 5 | Rejected previously; hopeful this time. |
| Balanced but low | 3 2 2 2 | Reviewer feedback limited; author unsure of chances. |
| Another full range | 1 3 5 | Author confused by the extremes; grateful but puzzled. |
| Extra reviews | 1 2 3 3 3 | One adjusted their score during rebuttal; one reviewer stayed vague. |
| Flat scores | 3 3 3 3 | Uniformly weak accept; uncertain accept probability. |
| High variance | 4 4 3 1 | Strong and weak opinions; outcome unclear. |
| Review flagged as LLM-generated | 2 1 3 3 | LLM tools flagged two reviews as possibly AI-generated. |
| Weak accept cluster | 3 3 2 | Reviewers did not check proofs or supplementary material. |
| Very mixed + LLM suspicion | 2 3 4 1 2 | Belief that two reviews are unfair / LLM-generated. |
| Lower tail | 2 2 1 1 | Reviewer comments vague; possible LLM usage suspected. |
| Low-medium range | 1 2 3 | Concern that reviewers missed the paper’s main points. |
| Long tail + unclear review | 3 2 2 1 | Two willing to adjust; one deeply critical with little justification. |
| Slightly positive | 4 3 2 | Reviewer praised the work but gave a 2 anyway. |
| Mixed high | 4 2 2 5 | Confusing mix, but the “5” may pull weight. |
| Middle mix | 2 2 4 4 | Reviewers disagree on strength; AC may play a key role. |
| More reviews than expected | 3 3 3 2 2 2 | Possibly emergency reviewers assigned. |
| Strong first reviewer | 3 2 2 | Others gave poor-quality reviews; unclear chances. |
| Pessimistic mix | 3 2 1 | One reviewer willing to raise their score, but the others were not constructive. |
| Hopeless mix | 1 2 2 3 | Reviewer missed key ideas; debating a NeurIPS resubmission. |
| Offline RL | 2 2 2 | Author still plans to rebut, but there is not enough space for additional results. |
| Counterfactual exp. | 1 2 2 3 | Got 7 7 8 8 from ICLR yet was still rejected by ICLR 2025; this time the scores are ridiculous! |

-
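Score spread, not just the average, is a big part of why these outcomes feel unpredictable. As a rough illustration only, here is a minimal Python sketch that computes the mean and standard deviation of a few of the score sets reported above; the labels are just the row names from the table, and nothing here reflects how ACs actually weigh reviews.

```python
import statistics

# Score tuples copied from the community-reported table above (illustrative only).
reported_scores = {
    "Theoretical ML paper": [4, 4, 4, 3],
    "Exhaustive range": [2, 3, 4, 5],
    "Another full range": [1, 3, 5],
    "Flat scores": [3, 3, 3, 3],
    "High variance": [4, 4, 3, 1],
}

for paper, scores in reported_scores.items():
    mean = statistics.mean(scores)
    spread = statistics.pstdev(scores)  # population standard deviation across reviewers
    print(f"{paper:<25} mean={mean:.2f} stdev={spread:.2f}")
```

Two papers with the same mean can have very different spreads (compare “Flat scores” with “Another full range”), which is part of why authors find it so hard to guess how the AC will read the reviews.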
ICML 2025 Review – Most Prominent Issues
Sources are labeled where applicable.
1. 🧾 Incomplete / Low-Quality Reviews
- Several submissions received no reviews at all (Zhihu).
- Single-review papers despite multi-review policy.
- Some reviewers appeared to skim or misunderstand the paper.
- Accusations that reviews were LLM-generated: generic, hallucinated, overly verbose (Reddit).
2. Unjustified Low Scores
- Reviews that lacked substantive critique but still gave scores of 1 or 2 without explanation.
- Cases where positive commentary was followed by a low score (e.g., "Good paper" + score 2).
- Reviewers pushing personal biases (e.g., “you didn’t cite my 5 papers”).
3. 🧠 Domain Mismatch
- Theoretical reviewers assigned to empirical papers and vice versa (Zhihu).
- Reviewers struggling with areas outside their expertise, leading to incorrect comments.
4. Rebuttal System Frustrations
- 5000-character rebuttal limit per reviewer too short to address all concerns.
- Markdown formatting restrictions (e.g., no multiple boxes, limited links).
- Reviewers acknowledged rebuttal but did not adjust scores.
- Authors felt rebuttal phase was performative rather than impactful.
5. 🪵 Bureaucratic Review Process
- Reviewers forced to fill out many structured fields: "claims & evidence", "broader impact", etc.
- Complaint: “Too much form-filling, not enough science” (Zhihu).
6. Noisy and Arbitrary Scoring
- Extreme score variance within a single paper (e.g., 1/3/5).
- Scores that didn’t align with the content of the reviews or with the comparisons they cited.
- Unclear thresholds and lack of transparency in AC decision-making.
7. Suspected LLM Reviews (Reddit-specific)
- Reviewers suspected of using LLMs to generate long, vague reviews.
- Multiple users ran reviews through tools like GPTZero / DeepSeek and got LLM flags.
8. Burnout and Overload
- Reviewers overloaded with 5 papers, many outside comfort zone.
- No option to reduce load, leading to surface-level reviews.
- Authors and reviewers both expressed mental exhaustion.
9. Review Mismatch with Paper Goals
- Reviewers asked for experiments outside scope or compute budget (e.g., run LLM baselines).
- Demands for comparisons against outdated or irrelevant benchmarks.
10. Lack of Accountability / Transparency
- Authors wished for reviewer identity disclosure post-discussion to encourage accountability.
- Inconsistent handling of rebuttal responses across different ACs and tracks.
-
Even if a rebuttal is detailed and thorough, reviewers often only ACK without changing the score. This usually means they accept your response but don’t feel it shifts their overall assessment enough. Some see added experiments as “too late” or not part of the original contribution. Others may still not fully understand the paper but won’t admit it. Unfortunately, rebuttals prevent score drops more often than they raise scores.
-
I’d like to amplify a few parts of the experience shared by XY天下第一漂亮, because it represents not just a “review gone wrong” but a systemic breakdown in how feedback, fairness, and reviewer responsibility are managed at scale.
A Story of Two "2"s: When Reviews Become Self-Referential Echoes
The core absurdity here lies in the two low-scoring reviews (Ra and Rb), who essentially admitted they didn’t fully understand the theoretical contributions, and yet still gave definitive scores. Let's pause here: if you're not sure about your own judgment, how can you justify a 2?
Ra: “Seems correct, but theory isn’t my main area.”
Rb: “Seems correct, but I didn’t check carefully.”
That’s already shaky. But it gets worse.
After the author makes a decent rebuttal effort, addressing Rb’s demands and running additional experiments, Rb acknowledges that their initial concerns were “unreasonable,” but then shifts the goalposts. Now the complaint is lack of SOTA performance. How convenient. Ra follows suit by quoting Rb, who just admitted they were wrong, and further downgrades the work as “marginal” because SOTA wasn’t reached in absolute terms.
This is like trying to win a match where the referee changes the rules midway — and then quotes the other referee’s flawed call as justification.
Rb’s Shapeshifting Demands: From Flawed to Absurd
After requesting fixes to experiments that were already justified, Rb asks for even more — including experiments on a terabyte-scale dataset.
Reminder: this is an academic conference, not a hyperscale startup. The author clearly explains the compute budget constraint, and even links to previous OpenReview threads where such experiments were already criticized. Despite this, Rb goes silent once the additional experiments are delivered.
Ra, having access to these new results, still cites Rb’s earlier statement (yes, the one Rb backtracked from), calling the results "edge-case SOTA" and refusing to adjust the score.
Imagine that: a reviewer says, “I don’t fully understand your method,” then quotes another reviewer who admitted they were wrong, and uses that to justify rejecting your paper.
Rebuttal Becomes a Farce
The third reviewer, Rc, praises the rebuttal but still refuses to adjust the score because “others had valid concerns.” So now we’re in full-on consensus laundering, where no single reviewer takes full responsibility, but all use each other’s indecisiveness as cover.
This is what rebuttals often become: not a chance to clarify, but a stress test to see whether the paper survives collective reviewer anxiety and laziness.
The Real Cost: Mental Health and Career Choices
What hits hardest is the closing reflection:
"A self-funded GPU, is it really enough to paddle to publication?"
That line broke me. Because many of us have wondered the same. How many brilliant, scrappy researchers (operating on shoestring budgets, relying on 1 GPU and off-hours) get filtered out not because of lack of ideas, but because of a system designed around compute privilege, reviewer roulette, and metrics worship?
The author says they're done. They're choosing to leave academia after a series of similar outcomes. And to be honest, I can't blame them.
A Final Note: What’s Broken Isn’t the Review System — It’s the Culture Around It
It’s easy to say "peer review is hard" or "everyone gets bad reviews." But this case isn’t just about a tough review. It’s about a system that enables vague criticisms, shifting reviewer standards, and a lack of accountability.
If we want to keep talent like the sharing author in the field, we need to:
- Reassign reviewers when they admit they're out-of-domain.
- Disallow quoting other reviewers as justification.
- Add reviewer accountability (maybe even delayed identity reveal).
- Allow authors to respond once more if reviewers shift arguments post-rebuttal.
- Actually reduce the bureaucratic burden of reviewing.
To XY天下第一漂亮 — thank you for your courage. This post is more than a rant. It’s a mirror.
And yes, in today’s ML publishing world:
Money, GPUs, Pre-train, SOTA, Fake results, and Baseline cherry-picking may be all you need; but honesty and insight like yours are what we actually need. -
When “You answered my questions” somehow translates to “Still a weak reject.”
Attached is a classic case of “Thanks, but no thanks” review logic. Even when your method avoids combinatorial explosion and enables inference-time tuning… innovation apparently just isn’t innovative enough?
Peer review or peer roulette?
-
Just wanted to casually share a couple of ICML rebuttal stories I came across recently. If you're going through the process too, maybe these resonate.
Case 1: Maxwell - When the System Feels Broken
Maxwell submitted three papers this year. Got ten reviewers in total. So far? All ten acknowledged the rebuttal, but zero replied, zero changed scores. Even worse, he says at least two reviews look like they were written entirely by ChatGPT. He tried reaching out to the Area Chair, but got radio silence.
He also reflected on some bigger issues with ML conference peer review in general. In his words, the bar is too low. There's no desk reject phase like in journals, and papers can be endlessly recycled, so a lot of half-baked submissions flood in hoping to hit the jackpot.
Maxwell suggested a few radical ideas:
- Fix the number of accepted papers per year (e.g., cap it at 2500).
- Only the top 5000 submissions (or 50% of total) go to actual review.
- Introduce a yearly desk reject quota per author. If you get desk rejected 6 times, you're banned from submitting to the top conferences for that year.
Yeah, this would definitely stir things up and reduce the number of papers, but he argues it's the only way to fix quality and reviewer burnout. Tough love?
Case 2: 神仙 - Reviewer Missed the Point
神仙 had a different kind of frustration. His paper proposed a method to accelerate a system under a given scenario (let’s call it scenario A). But multiple reviewers fixated on why A was designed the way it was, not on the actual method.
This was super frustrating since A was just a fixed example, not the subject of the paper. So he’s planning to carefully explain this misunderstanding in the rebuttal and hopes it will help increase the score.
He also noted a pattern: reviewers who gave low scores are more responsive in rebuttal, while those who gave high scores often don't even acknowledge. Anyone else notice this trend?
Some of my thoughts
One story points to deeper structural problems in the review system. The other is just classic misalignment between what authors write and what reviewers pick up on.
Whether you're running into AI-generated reviews, silent ACs, or just misunderstood contributions, you're not alone.
Curious to hear if others had similar experiences. Rebuttal season is rough.
Good luck everyone.
-
For reference, here are the historical acceptance rates of ICML:
| Conference | Acceptance Rate and Stats |
| --- | --- |
| ICML'14 | 15.0% (Cycle I), 22.0% (Cycle II) |
| ICML'15 | 26.0% (270/1037) |
| ICML'16 | 24.0% (322/?) |
| ICML'17 | 25.9% (434/1676) |
| ICML'18 | 25.1% (621/2473) |
| ICML'19 | 22.6% (773/3424) |
| ICML'20 | 21.8% (1088/4990) |
| ICML'21 | 21.5% (1184/5513); 166 long talks, 1018 short talks |
| ICML'22 | 21.9% (1235/5630); 118 long talks, 1117 short talks |
| ICML'23 | 27.9% (1827/6538); 158 live orals, 1669 virtual orals with posters |
| ICML'24 | 27.5% (2610/9473); 144 orals, 191 spotlights, 2275 posters |
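For anyone who wants to sanity-check or extend the table, here is a minimal Python sketch that recomputes the rates from the accepted/submitted counts listed above (years without full counts are skipped). Small discrepancies with the quoted percentages can come from rounding or slightly different official counts.

```python
# Accepted / submitted counts copied from the table above (full-count years only).
stats = {
    "ICML'15": (270, 1037),
    "ICML'17": (434, 1676),
    "ICML'18": (621, 2473),
    "ICML'19": (773, 3424),
    "ICML'20": (1088, 4990),
    "ICML'21": (1184, 5513),
    "ICML'22": (1235, 5630),
    "ICML'23": (1827, 6538),
    "ICML'24": (2610, 9473),
}

for conf, (accepted, submitted) in stats.items():
    # Recompute the acceptance rate; may differ from the quoted figure by a rounding step.
    print(f"{conf}: {accepted / submitted:.1%} ({accepted}/{submitted})")
```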