CVPR Reviewer Said: "This Work Isn't Fit for NeurIPS, Try CVPR Instead"
-
Author: Tianfan Xue (CUHK MMLab Assistant Professor)
Date: April 2, 2025
Reposted from: Xiaohongshu User Tianfan Xue’s Profile
🤯 The Struggles of Peer Review: sharing some pain to ease yours
Let’s talk about some unbelievable peer review experiences I (and people I know) have encountered when submitting to CV/ML conferences. Consider this a bit of academic therapy.
You might find some humor here, but also some solace in knowing you're not alone.
Before diving into these stories, let me reiterate my stance: even at massive conferences like CVPR, ICCV, and ECCV, the peer review process in computer vision is still one of the best across all fields. Most reviewers are serious and responsible. But in a field this large, the occasional bad review is unavoidable.
So take these tales lightly; after all, science is full of uncertainty, and peer review is just one part of the journey. A good piece of work will shine eventually. For example, Prof. Xue's most cited paper, Vimeo90K, was rejected three times by NeurIPS, CVPR, and ECCV before finally landing in a journal.
Example 1: "Image Super-Resolution Is More Important Than Denoising"
We once submitted a paper on image denoising network design. One reviewer commented:
“Why do this experiment on image denoising? Why not test network efficiency on other tasks?”
Okay, fair enough. That’s somewhat constructive.
But then they added:
“Image denoising is not an important task; image super-resolution is.”
This is where it got ridiculous. That single sentence undermined the entire field of image denoising. Seriously?
Example 2: "What Is PSNR?"
A reviewer once asked:
“What is PSNR, and why didn’t you define it?”
From that point onward, I always made sure to write out:
PSNR (peak signal-to-noise ratio).
It felt like being asked: “What is CNN, and why didn’t you define it?” in a deep learning paper...
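For the record (and for any reviewer who asks again), the definition fits on one line, with MAX the peak pixel value (e.g., 255 for 8-bit images) and MSE the mean squared error against the reference:

$$\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}^2}{\mathrm{MSE}}\right)\ \text{dB}$$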
Example 3: “Not a Significant Improvement”
In one paper, we did a user study comparing our method to a baseline. 87% of participants preferred ours.
The reviewer said:
“Improvement not significant.”
Come on! That’s a 6:1 ratio!
Would a football game need to end 4:0 for the win to be considered “clear”?
We dug deeper. In another setting with 90% user preference, the reviewer still said the improvement was “not significant.”
Apparently neither 87% nor 90% was convincing enough.
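For what it's worth, here is a minimal back-of-the-envelope check of how lopsided 87% actually is. The sample size (100 participants) and the test itself are my own illustrative assumptions, not the study's actual numbers or analysis: under a "no preference" null hypothesis, the chance of 87 out of 100 votes going one way is vanishingly small.

```python
# Minimal sketch: one-sided binomial test for a hypothetical user study.
# ASSUMPTION: 100 participants, 87 preferring the proposed method; the null
# hypothesis is "no preference" (each participant picks either method with p = 0.5).
from math import comb

def binomial_p_value(successes: int, trials: int, p_null: float = 0.5) -> float:
    """Probability of observing at least `successes` wins out of `trials` by chance."""
    return sum(
        comb(trials, k) * p_null**k * (1 - p_null) ** (trials - k)
        for k in range(successes, trials + 1)
    )

if __name__ == "__main__":
    p = binomial_p_value(successes=87, trials=100)
    print(f"p-value under the no-preference null: {p:.2e}")
    # The result is many orders of magnitude below the usual 0.05 threshold,
    # i.e. "not significant" is a hard claim to defend here.
```

And with 90% preference the p-value only gets smaller, so the claim does not get any easier to defend.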
Example 4: "Not NeurIPS Material, Try CVPR"
A friend submitted a paper to CVPR, and the reviewer wrote:
“This work is not suitable for NeurIPS. I suggest submitting to CVPR.”
Wait... it was submitted to CVPR.
To be clear: this was a CVPR reviewer saying this, suggesting... the paper be submitted to CVPR.
Make it make sense!
🫥 Example 5: "Your New Method Is Too Obvious"
We proposed a new image capture method that could improve image quality with proper post-processing.
The reviewer said:
“This paper makes no contribution. The results show that this processing improves image quality, but any method A would do the same.”
In short: You’re not wrong, you’re just too obvious.
🧠 Final Thoughts
The peer review process can be frustrating, but remember: you're not alone. Even good work sometimes gets caught in bad reviews. What matters most is persistence.
"Good research always finds its light."
So next time you get an absurd review, maybe just laugh it off... and keep going.
-
Been there, felt that. Sometimes peer review feels more like roulette than rigor, but hey, good science endures beyond a stray reviewer’s “hot take.”