The ICML'25 Review Disaster: "What Does 'k' in k-NN Mean?" 😱
The recent ICML 2025 review cycle has sparked outrage and dark humor across the ML community. Here’s a compilation of jaw-dropping anecdotes from Zhihu (China’s Quora) exposing the chaos, ranging from clueless reviewers to systemic failures.
Buckle up!
1. The "k-NN" Incident
User "写条知乎混日子" dropped this gem:
"The reviewer asked me: ‘What does the ‘k’ in k-NN stand for?’"
Yes, a reviewer at ICML, a top-tier ML conference, needed clarification on one of the most basic ML concepts.
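For anyone who somehow landed a reviewing assignment without knowing: the "k" is simply the number of nearest neighbors whose labels get a majority vote. A minimal sketch (toy data and function name are ours, not from any paper under review):

```python
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.

    `train` is a list of ((x, y), label) pairs; distance is squared Euclidean.
    """
    neighbors = sorted(
        train,
        key=lambda pt: (pt[0][0] - query[0]) ** 2 + (pt[0][1] - query[1]) ** 2,
    )[:k]  # the k closest points -- that is all the 'k' stands for
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_predict(train, (0.5, 0.5), k=3))  # → A
```

Three lines of logic; presumably less effort than typing out the question in a review form.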
2. The "Pro vs. RPO" Mix-Up
User "CpGD7" shared:
"The reviewer misread ‘rpo’ as ‘pro’ and questioned why our ‘advanced version’ lost to baselines. Next time, should I rename my main experiments ‘Promax’ to get accepted?"
3. The "I Didn’t Have Time to Check Proofs" Confession
User "虚无", a reviewer, admitted:
"I got assigned 5 theoretical papers. Checking proofs properly takes 7–10 days per paper. I only had time to verify the first two; the rest got high scores based on ‘intuition’ because I couldn’t validate the math."
This raises a serious ethical concern: Papers are being accepted/rejected based on guesses, not rigor.
4. The "Citation Mafia" Reviewer
User "Jane" reported:
"A 1-star reviewer demanded we cite 7 unrelated papers — 6 of which were by the same author. We withdrew the submission."
5. The "I Review Papers in a Field I Don’t Understand" Dilemma
User "Highlight" (a biochemist) was roped into reviewing ML theory:
"I’m from a biochemistry background. They assigned me 5 ML papers. I’m scrambling to understand the math over the weekend. They must be desperate."
6. The "R is Not the Real Numbers" Debacle
User "better" vented:
"A reviewer complained: ‘What is 𝐑? You never defined it. It can’t possibly mean the real numbers!’ …What else would it be?!"
7. The "Dataset Police" Strike Again
User "Reicholguk" faced this absurdity:
"A reviewer demanded we test on ‘popular’ datasets like Cora/Citeseer, ignoring that we already used Amazon Computer and Roman Empire graphs (which are standard in our subfield). Is this reviewer an AI? Even AI wouldn’t be this clueless."
8. The "I’ll Just Give Random Scores" Strategy
Many users reported suspiciously patterned scores:
- "877129391241": "One of my papers got no reviews at all (blank). Another got all 1s and 2s."
- "陈靖邦": "Got 4443 (scores of 4, 4, 4, 3) after an ICLR desk reject. Is this luck, or a sign reviewers just clicked randomly?"
Why This Matters
These aren’t just funny fails; they reveal deep flaws in peer review:
- Overworked reviewers (5+ papers, no opt-out).
- Mismatched expertise (biochemists judging theory).
- Lazy/bad-faith reviews (no comments, citation demands).
- Systemic randomness (scores with no justification).
As User "虚无" warned:
"If ICML keeps this up, no serious researcher will want to submit or review."
The Big Question
Should top conferences like ICML:
- Cap reviewer workloads?
- Allow expertise-based opt-outs?
- Penalize low-effort reviews?
What’s your take? Share your worst review horror stories below!
(Sources: Zhihu users "877129391241", "虚无", "CpGD7", "陈靖邦", "Jane", "Highlight", "better", "Reicholguk", "写条知乎混日子".)