NeurIPS 2025 Review Released: Remember, a score is not your worth as a researcher!
-
"In every peer review cycle, some will win, some will lose, and most will need a group hug."
— Every CS author in late July
The Reviews Have Landed — Now What?
Today, the NeurIPS 2025 reviews have dropped for both the Main Track and Datasets & Benchmarks Track. Congratulations, commiserations, and caffeine to all! Whether you’re celebrating 6s or nursing some bruising 2s, the next week is your time to shine (or, at least, to clarify, correct, and convince).
Rebuttal period basics:
- Preliminary reviews are now available in your Author Console.
- You have until July 30th, 11:59pm AoE to submit your rebuttal.
- Reviewers see your responses starting July 31st.
- Discussion with reviewers: July 31st – August 6th, also AoE.
Golden Rules for Rebuttal:
- Be factual, focused, and respectful. Address factual errors, direct reviewer misunderstandings, and answer questions — don’t vent or include new experiments (save those for your next revision).
- Do not include: links to external pages, identifying information, or anything that violates the double-blind policy.
- If you get a low-quality, disrespectful, or inaccurate review: Use the “Author AC Confidential Comment” — but only for real issues, and proofread carefully.
- Public release: For accepted (and opt-in rejected) papers, your reviews and rebuttals will eventually go public. So, don’t say anything you wouldn’t want your PI, future employer, or your mom to read.
Pro tip: Don’t treat the average score as a crystal ball. ACs care more about review content and discussion than numbers alone.
Score Situation: What Are People Seeing?
This year, review scores feel lower than ever. Even the “inflated” public polls are showing a stark drop.
“Even with some upward bias in online voting, the proportion of papers with an average score below 3.5 is shockingly high. The days of 4 = borderline are over; now, 3 is the median.” — Xiaohongshu user
From the Xiaohongshu/WeChat community stats:
| Score Range | Number of community votes |
|---|---|
| [4.33, 6] | 117 |
| [3.67, 4.33) | 222 |
| [3, 3.67) | 324 |
| [2, 3) | 110 |

Interpretation:
- The most common range is [3, 3.67) — a “borderline” area that leaves authors anxious and ACs with tough choices.
- Fewer than 20% of reported averages fall at or above 4.33 (117 of 773 votes, roughly 15%).
Social media stories are awash with:
- Scores like 4/4/3/2 (“Should I rebut?”), 5/3/2/2, 4/4/4/2, or the heartbreaking all-2s.
- Anxiety about outlier reviewers (especially confident low scores).
- A common refrain: if your average is near 3, don't lose hope — but prepare a strong, factual rebuttal.
Reddit, Zhihu, and WeChat are full of batch statistics:
- “My AC batch of 14: 1 over 4.0, 4 between 3.0 and 4.0, 9 at or below 3.0.”
- “Scores 5/4/3/2 with one harsh but easily rebuttable review — do I have a shot?”
- Senior AC on X: “Out of my 100-paper batch: 1≥5.0, 6≥4.5, 11≥4.0, 25≥3.75, 42≥3.5. Naive cutoff for acceptance: ~3.75.”
In short:
- 3–4 is the new “limbo.”
- ≥4.5 is relatively rare, and usually a good sign.
- Outliers (both low and high) abound, and the rebuttal can make a difference if you’re close to the line.
Collected Score Table — 2025 Social Media Reports (as of this writing)
Here's a summary table of self-reported scores and cases seen on Chinese forums, Reddit, and WeChat:
| Score Pattern | Track | Commentary |
|---|---|---|
| 4/4/4/5 | D&B/Main | Excellent, but still nerves about outliers |
| 5/3/2/2 | Main | Mixed; single strong positive, but two low (see if rebuttal helps) |
| 4/4/3/2 | Main | Very common; hope if the “2” can be addressed |
| 3/3/3/3 | Main/D&B | “Is this death?” — Actually, still possible with strong text |
| 4/4/4/2 | D&B | Confident negative reviewer; focus rebuttal there |
| 4/3/3/3 | Main | Borderline; ACs will look closely at review text |
| 5/4/3/3/2 | Main | 5-reviewer paper; chance if low scores are weakly justified |
| 2/2/2/2 | Main | Unlikely, but never zero hope |
| 4/5/2/2 | Main | Outliers matter; address negatives, emphasize positives |
| 4/3/2/5 | Main | Again, a split batch — rebuttal is key |
| 3/3/4/4 | Main | With reviewer hints of raising score, good chance |

Trends from these numbers:
- Most papers cluster between 3.0 and 4.0.
- It’s rare but not impossible to get four “strong accepts.”
- Negative reviews (scores 1–2) are often outliers, sometimes given with high confidence, but not always fatal if addressed with a clear, factual rebuttal.
- “Rebut or run?” is the most common question, and the answer is: if your average is above 2.5, you should rebut, unless all reviewers are negative with strong justifications. The bar for a “safe accept” is much higher than before.
🧐 The Author Survey — And Why It Matters
NeurIPS 2025 has rolled out an optional survey asking authors to rank their own submissions. This isn’t a trap — it’s part of a longer-term research initiative to improve the peer review process, not to influence this year’s outcome.
How does it work?
- You’re asked to honestly rank your own submissions by their scientific contribution.
- The survey explores the relationship between author perceptions, review scores, and future impact.
- The ultimate aim? To find out whether author-informed rankings can improve the reviewing and decision-making process, leveraging insights from the isotonic mechanism.
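For the curious, the isotonic mechanism (Su, 2021) is simple at its core: authors rank their own papers, and the raw review scores are projected onto the set of score vectors consistent with that ranking, i.e., isotonic regression. Below is a minimal sketch, assuming scikit-learn is available; the paper names and scores are invented purely for illustration, not drawn from any real data:

```python
# Toy sketch of the isotonic mechanism: adjust raw review scores so they
# respect the author's self-ranking. Names and numbers are made up.
from sklearn.isotonic import IsotonicRegression

papers = ["paper_A", "paper_B", "paper_C", "paper_D"]  # author's ranking, best first
raw_scores = [3.5, 3.75, 3.0, 3.25]                    # average review scores, same order

# Adjusted scores must be non-increasing along the ranking
# (the author's best paper should not end up scoring below a worse one).
positions = list(range(len(papers)))
adjusted = IsotonicRegression(increasing=False).fit_transform(positions, raw_scores)

for name, raw, adj in zip(papers, raw_scores, adjusted):
    print(f"{name}: raw={raw:.2f} -> adjusted={adj:.2f}")
# Ranking-violating pairs get pooled: A/B become 3.625, C/D become 3.125.
```

Whether such author-informed adjustments would actually improve decisions is exactly what the survey data is meant to help answer.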
Privacy note: Your response is completely anonymous, not shared with ACs or reviewers, and will be deleted after the study finishes.
Echoing the CSPaper Mission
This is very much aligned with the mission of CSPaper Review:
- Bringing transparency and feedback early in the process
- Experimenting with ways to make reviewing more fair, consistent, and author-centric
- Building a community that shares insights, pain points, and best practices
CSPaper Review has already served 7,000+ unique users and processed 15,000+ reviews, and it directly addresses exactly the kind of reviewing bottleneck and author pain the community faces right now.
Why Try CSPaper Review? A New Perspective
If you’re feeling battered by NeurIPS reviews, or just want a second opinion for your next round (ICLR, AAAI, EMNLP...), CSPaper Review offers:
- Fast, conference-specific peer reviews (within 1 minute)
- Benchmarked realism (using real review data from OpenReview/social media)
- Feedback on where your paper might stand in a different venue
- Useful for diagnosing common reviewer objections, clarifying writing, or testing rewrites before next submission
- Free to use (up to 20 reviews/day), no login required
Remember, tools like CSPaper Review won’t change your current NeurIPS fate, but they can help diagnose weaknesses, frame your rebuttal, and plan for future resubmission or revision.
The CSPaper team is also keen to learn how well our tool’s output matches the reviews you received. We welcome you to share with us or the community by emailing support@cspaper.org or creating a post here!
End Words: Keep Calm, Rebut Wisely, and Stay in the Game
To everyone feeling crushed, furious, or numb: You are not alone.
- Most people’s scores are “mid.”
- Outliers are everywhere.
- The review text matters more than the mean.
- “Borderline” is the new normal.
Do your best rebuttal — factual, concise, and polite.
Discuss with your co-authors, check the CSPaper Review tool for a fresh look, and don’t be afraid to ask your community for support or advice.
And remember: a score is not your worth as a researcher. May the review gods be with you. May your scores rise, and may you touch grass before the notifications drop.
— The CSPaper.org team
Now, go forth and rebut. Or, at least, go outside for five minutes — you earned it.
-
As a reviewer, I got messages from OpenReview about authors withdrawing their submissions upon seeing the review scores. The two withdrawn papers have the following scores:
4, 3, 3, 3
4, 3, 3, 2
I guess a mean score >= 4 would be a good spot for a chance of acceptance? Anything below could be kinda far from acceptance?
-
Who’s Adam? The Most Ridiculous NeurIPS Review Is Here
Have you received your NeurIPS 2025 review results yet? Brace yourselves—it’s officially open season for venting frustrations about reviewer comments!
Right on cue, we stumbled upon what might just be the funniest and most outrageous NeurIPS comment of the year. Credit goes to Yiping Lu, an assistant professor in Northwestern University’s Department of Industrial Engineering and Management Sciences and a proud Peking University alum.
Lu posted a screenshot of this legendary review on X (formerly Twitter), and it immediately went viral—over 505,000 views in less than a day! Make your guess: is this review from an LLM?
The Reviewer’s Baffling Comment
Line 336: “Both architectures are optimized with Adam.”
“Who/what is ‘Adam’? I think this is a very serious typo that the author should have removed from the submission.”

Yes, folks, you heard it correctly. The reviewer, presumably an expert, genuinely believed “Adam”—one of the most fundamental optimization algorithms in machine learning—was either a typo or possibly a mysterious colleague named Adam.
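For anyone else wondering: “Adam” is the Adam optimizer (Kingma & Ba, ICLR 2015), probably the most widely used optimizer in deep learning, and “optimized with Adam” is boilerplate in thousands of papers. A minimal PyTorch sketch of what a line like the paper’s Line 336 describes; the tiny model and data here are hypothetical stand-ins, not from the paper under review:

```python
# "Both architectures are optimized with Adam": a routine training step.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # stand-in for one of the architectures
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # <- this is "Adam"

x, y = torch.randn(32, 10), torch.randn(32, 2)
loss = nn.functional.mse_loss(model(x), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()  # one Adam update; no colleague named Adam involved
```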
Community Roast
Even Dan Roy, a respected professor and machine-learning researcher, couldn’t resist firing a shot:
“NeurIPS reviews are complete trash.”
🧨 The Deeper Issue: Quantity vs. Quality
With NeurIPS submissions skyrocketing toward 30,000 papers this year, it’s becoming glaringly obvious that human reviewers alone can’t handle the load. This mismatch between submission volumes and reviewer capacity inevitably leads to bizarre review outcomes like the infamous “Adam incident.”
Could AI Save Peer Review?
AI tools are already creeping into academic peer review. UC Berkeley postdoc Xuandong Zhao recently tweeted:
“#NeurIPS2025 reviews are out, and the authenticity of reviews surprises me again. Two years ago, maybe 1/10 felt AI-assisted. Now? It seems 9/10 are AI-modified, going beyond simple grammar fixes to fully generated comments.
As someone researching AI-generated content detection, these might be impressions rather than hard data. Still, this trend worries me: AI writing papers, AI reviewing them, AI publishing them. Is this really the future of academia we want?”
From drafting manuscripts to reviewing and publishing, AI now permeates the entire academic pipeline.
What Now? It’s Rebuttal Season!
As funny as this review is, it’s still critical to tackle it seriously in your rebuttals. Thankfully, one helpful community member shared an invaluable resource from 2020.
Final Note
Don’t forget to share your story by posting it on cspaper.org!
-
NeurIPS Author-Reviewer Discussion: Key Points
-
Purpose:
Only clarify reviewer questions. Do not add new arguments or extensive commentary.
-
How to Respond:
- Use the “Official Comment” button for each discussion thread.
- Be brief and to the point.
- Do not ask or urge reviewers to reply; ACs/PCs will manage that.
-
Confidential Comments:
- Use “Author AC Confidential Comment” for private notes to ACs (not visible to reviewers).
- Available until Aug 6.
-
Anonymity & Conduct:
- Do not reveal your identity or use links.
- Remain respectful at all times.
-
Timeline:
- Discussion closes Aug 6, 11:59pm AoE.
- Reviewer final ratings/justifications are hidden until notification.
- Save reviews before Aug 6 if you plan to withdraw.
-
Navigation:
Use filters in OpenReview to view messages by author/reviewer/AC.
-