NeurIPS 2021 Experiment: Peer Review Still Noisy After 7 Years!
The peer review process is the backbone of academic publishing, but how reliable is it, really?
NeurIPS 2021 revisited the well-known 2014 NeurIPS Experiment, sending a subset of submissions through two independent review committees to measure how consistent the review process is today. The results? Still noisy, still unpredictable!
Key Findings:
- 23% of papers were accepted by one committee but rejected by another.
- 50.6% of accepted papers would have been rejected if reviewed by another committee.
- Over half of spotlights were rejected by the other committee, raising questions about prestige paper selections.
- Increasing selectivity amplifies randomness, making the review process even more arbitrary.
- Despite a 5x increase in submissions since 2014, inconsistency remains unchanged.
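The first two numbers above are consistent with each other under a simple symmetry assumption. Here is a back-of-the-envelope sketch; the ~26% overall acceptance rate is my assumption (an approximate NeurIPS 2021 figure), not a number from the post:

```python
# Back-of-the-envelope check: how does a 23% decision-disagreement rate
# translate into the chance that an accepted paper would have been
# rejected by the other committee?
#
# Assumptions (not stated in the post above):
#   - disagreements are symmetric: half are accept/reject, half reject/accept
#   - the overall acceptance rate is roughly 26%

disagreement_rate = 0.23   # fraction of papers where the two committees disagreed
acceptance_rate = 0.26     # assumed fraction accepted by a single committee

# Of all papers, the fraction accepted by committee A but rejected by B:
accept_then_reject = disagreement_rate / 2

# Conditional on being accepted by committee A:
p_rejected_by_other = accept_then_reject / acceptance_rate
print(f"{p_rejected_by_other:.1%}")  # roughly 44%, the same ballpark as the reported 50.6%
```

The symmetric estimate (~44%) lands a bit below the reported 50.6%, which suggests the disagreements were not perfectly symmetric, but the two statistics clearly describe the same underlying noise.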
These findings highlight the ongoing challenges in peer review for computer science research. Are we truly selecting the best work, or just rolling the dice?
What do you think? Join the discussion: how can we improve peer review in ML & CS research?