KDD 2024: An AC's journey through frustration and reflection
-
I came across a shared post written from a unique dual perspective, as both author and reviewer, on the KDD 2024 conference, so I am re-posting the main narrative here.
Initially, "I" (the author) approached KDD 2024 enthusiastically, both as an author and a reviewer (Area Chair). Yet, the journey quickly turned into a profound lesson about the state of peer review.
Early on, as I started reviewing, I noticed significant discrepancies among reviewers. Of the six papers assigned to me, five had received feedback from five to seven reviewers each, with opinions diverging drastically. I wondered how authors could adequately respond to such divergent comments during the rebuttal.
My own submission had its challenges. The first round of reviews came in unevenly:
| Reviewer | Scope | Novelty | Technical Quality | Presentation Quality | Reproducibility | Confidence |
|----------|-------|---------|-------------------|----------------------|-----------------|------------|
| R1       | 4/4   | 3/7     | 3/7               | 2/3                  | 2/3             | 4/4        |
| R2       | 4/4   | 4/7     | 4/7               | 3/3                  | 3/3             | 3/4        |
| R3       | 4/4   | 4/7     | 6/7               | 3/3                  | 3/3             | 3/4        |
| R4       | 2/4   | 2/7     | 3/7               | 1/3                  | 2/3             | 4/4        |
| R5       | 4/4   | 6/7     | 5/7               | 3/3                  | 3/3             | 4/4        |
| R6       | 4/4   | 6/7     | 5/7               | 3/3                  | 3/3             | 2/4        |

Notably, Reviewer 4 gave suspiciously low scores, prompting concerns of potential malicious intent.
The rebuttal phase arrived, and I faced uncertainty. High-scoring reviewers promptly confirmed their original scores, while those who had given average or low scores initially remained silent, creating anxiety. Especially troubling was the reviewer who had clearly scored maliciously low: after being prompted, they responded aggressively with a longer, harsher critique than the original review, deepening my frustration.
While reviewing other submissions, I identified concerning trends in KDD’s review process. Papers remotely similar to existing works were swiftly labeled as "poor novelty", and those without extensive mathematical derivations were marked as having poor technical soundness. KDD, a Data Mining conference, seemed to be applying overly stringent Machine Learning standards, signaling deeper issues within its reviewing culture.
After the rebuttal concluded, none of the reviewers changed their scores. My hopes waned further upon discovering inconsistencies even among the papers I oversaw. For example, two submissions that I rated highly, and that other reviewers similarly praised, were each undermined by a single harsh reviewer, significantly impacting their fate.
On May 17, 2024, came the heartbreaking final update: my paper was rejected by the Senior Area Chair for "lack of novelty," alongside all six anomaly detection papers I oversaw. Disappointed and exhausted, I now advise aspiring researchers to reconsider their paths: perhaps shifting toward more foundational Machine Learning, away from the turbulence of anomaly detection and traditional Data Mining.
Reflecting on this journey, I remain hopeful. Someday, perhaps as an Area Chair myself, I'll better understand the motivations of certain "distinguished" reviewers. Until then, resilience remains crucial — but right now, it's time to take a break and perhaps shed a quiet tear.