Iād like to amplify a few parts of the experience shared by XY天äøē¬¬äøę¼äŗ®, because it represents not just a āreview gone wrongā but a systemic breakdown in how feedback, fairness, and reviewer responsibility are managed at scale.
A Story of Two "2"s: When Reviews Become Self-Referential Echoes
The core absurdity lies with the two low-scoring reviewers (Ra and Rb), who essentially admitted they didnāt fully understand the theoretical contributions, yet still gave definitive scores. Letās pause here: if youāre not sure of your own judgment, how can you justify a 2?
Ra: āSeems correct, but theory isnāt my main area.ā
Rb: āSeems correct, but I didnāt check carefully.ā
Thatās already shaky. But it gets worse.
After the author puts in a decent rebuttal effort, addressing Rbās demands and running additional experiments, Rb acknowledges that their initial concerns were āunreasonableā ā but then shifts the goalposts. Now the complaint is lack of SOTA performance. How convenient. Ra follows suit by quoting Rb (who just admitted they were wrong) and further downgrades the work as āmarginalā because SOTA wasnāt reached in absolute terms.
This is like trying to win a match where the referee changes the rules midway ā and then quotes the other refereeās flawed call as justification.
Rbās Shapeshifting Demands: From Flawed to Absurd
After requesting fixes to experiments that were already justified, Rb asks for even more ā including experiments on a terabyte-scale dataset.
Reminder: this is an academic conference, not a hyperscale startup. The author clearly explains the compute budget constraint and even links to previous OpenReview threads where such demands were already criticized. Despite this, Rb goes silent once the additional experiments are delivered.
Ra, having access to these new results, still cites Rbās earlier statement (yes, the one Rb backtracked from), calling the results "edge-case SOTA" and refusing to adjust the score.
Imagine that: a reviewer says, āI donāt fully understand your method,ā then quotes another reviewer who admitted they were wrong, and uses that to justify rejecting your paper.
Rebuttal Becomes a Farce
The third reviewer, Rc, praises the rebuttal but still refuses to adjust the score because āothers had valid concerns.ā So now weāre in full-on consensus laundering, where no single reviewer takes full responsibility, but all use each otherās indecisiveness as cover.
This is what rebuttals often become: not a chance to clarify, but a stress test to see whether the paper survives collective reviewer anxiety and laziness.
The Real Cost: Mental Health and Career Choices
What hits hardest is the closing reflection:
"A self-funded GPU, is it really enough to paddle to publication?"
That line broke me, because many of us have wondered the same. How many brilliant, scrappy researchers ā operating on shoestring budgets, a single GPU, and off-hours ā get filtered out not for lack of ideas, but because of a system built around compute privilege, reviewer roulette, and metrics worship?
The author says they're done. They're choosing to leave academia after a series of similar outcomes. And to be honest, I can't blame them.
A Final Note: Whatās Broken Isnāt the Review System ā Itās the Culture Around It
Itās easy to say "peer review is hard" or "everyone gets bad reviews." But this case isnāt just about a tough review. Itās about a system that enables vague criticisms, shifting reviewer standards, and a lack of accountability.
If we want to keep talent like the sharing author in the field, we need to:
Reassign reviewers when they admit they're out-of-domain.
Disallow quoting other reviewers as justification.
Add reviewer accountability (maybe even delayed identity reveal).
Allow authors to respond once more if reviewers shift arguments post-rebuttal.
Actually reduce the bureaucratic burden of reviewing.
To XY天äøē¬¬äøę¼äŗ® ā thank you for your courage. This post is more than a rant. Itās a mirror.
And yes, in todayās ML publishing world:
Money, GPUs, pretraining, SOTA, faked results, and baseline cherry-picking may be all you need; but honesty and insight like yours are what we actually need.