📢 NeurIPS 2025 Submission Deadlines Are Approaching & Review Policy Highlights
-
Submitted an abstract today and guess what? The submission ID is already 11k+.
I mean, there are still 3 days till the abstract deadline!
-
My submission ID is already 125xx, and I submitted like 1 hour ago!
-
We also submitted a Datasets & Benchmarks track paper; its ID is already >1k.
-
Finally figured out where the flood of NeurIPS 2025 submissions came from — the legendary "Fibonacci-style" submission strategy is real...
At first, I thought the spike in NeurIPS reviewer registrations was just everyone jumping in to do some reviewing. But after going through my bids recently, I noticed several papers that were rejected from ICML showing up again, with exactly the same titles and abstracts. This world really is small…
For those unfamiliar, “bidding” is the process where reviewers select papers they’re interested in reviewing. If you skip bidding, the system randomly assigns you papers. Since all the major conferences use OpenReview now, this kind of overlap is inevitable. The same matching system that assigned you a paper at ICML is likely to send it your way again at NeurIPS.
My Review Criteria
Here’s a quick rundown of how I usually rate papers:
- If the paper has a clear motivation,
- the method matches the motivation, and
- the experiments are solid and effective,
then that’s a Weak Accept or better from me.
Common tiny issues that influence reviewers’ mood more than you would believe
Whether it’s NeurIPS, ICCV, or AAAI, I’ve noticed some recurring issues in many papers:
- Quotation marks misuse: Please use proper quotation marks (``...'' in LaTeX), not straight double quotes (see the LaTeX sketch right after this list).
- Missing experimental details: No GPU info, no hyperparameter settings, missing key reproducibility factors.
- Figure/Table separation: Figures on page 3 referenced only on page 6 — it’s a headache to track them down.
- Unexplained symbols in figures/tables: Sometimes you flip through multiple pages just to find a symbol definition.
- Broken or missing references: Tables or figures showing up as [?] in the text, or referenced without a “Table”/“Figure” prefix.
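For anyone typesetting in LaTeX, here is a minimal sketch of the quotation-mark and reference points above; the label name and caption are placeholders I made up, not from any real paper:
```latex
\documentclass{article}
\begin{document}

% Quotation marks: LaTeX wants ``...'' (backticks + apostrophes).
% Straight double quotes "..." render as two closing quotes.
They call it the ``Fibonacci-style'' submission strategy.

% References: label every float and always write a Table/Figure prefix,
% so readers never see a bare [?] or an unprefixed number.
\begin{table}[t]
  \centering
  \caption{Placeholder caption, purely for illustration.}
  \label{tab:example}  % hypothetical label name
  \begin{tabular}{lc}
    Method & Metric \\
    \ldots & \ldots \\
  \end{tabular}
\end{table}

As shown in Table~\ref{tab:example}, the tilde keeps ``Table'' and the
number on the same line.

\end{document}
```
If a reference ever shows up as [?] in the PDF, it usually just means the label is missing or LaTeX needs another compile pass.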
Some rebuttal tips for top conferences
Here’s a quick rebuttal survival guide based on experience:
- Prioritize the most important issues: Tackle key concerns first, e.g., novelty, experimental validity, data issues. Addressing these upfront helps the AC and reviewers reassess the paper’s core value quickly.
- Respond to the underlying concern: Sometimes reviewer comments hint at deeper doubts. Read between the lines to get to the “real” issue and tailor your reply accordingly.
- Keep the tone constructive: Rebuttals shouldn’t feel combative. Use phrases like “We understand the reviewer’s concern about...” or “We conducted an additional experiment to clarify...” to maintain a dialogue-friendly tone. Even when faced with tough reviews, stay polite and show a willingness to improve.
And finally…
The most universal strategy for top-tier conferences?
Just get lucky.
-
I heard there were 25k submissions last year, 16k valid submissions after rebuttal, and an acceptance rate of 22%. This year, there are about 25k main-track submissions, 6k or so in the datasets track, plus workshops, and the venue booked this year holds 20k people.
-
I heard that during the reviewer bidding process, the organizers made a mistake and revealed the full papers (including appendices) to all reviewers.
-
Hey @magicparrots, that sounds like a pretty serious slip if true.
I would love to hear more if you have a source or a screenshot, though.