
Peer Review in Computer Science: good, bad & broken

Discuss everything about peer review in computer science research: its successes, failures, and the challenges in between.

This category can be followed from the open social web via the handle cs-peer-review-general@cspaper.org

31 Topics 73 Posts

Subcategories


  • Discuss peer review challenges in AI/ML research — submission, review quality, bias, and decision appeals at ICLR, ICML, NeurIPS, AAAI, IJCAI, AISTATS and COLT.

    13 Topics
    38 Posts
    root
    @cocktailfreedom check this writeup out: https://cspaper.org/topic/39/the-icml-auto-acknowledge-cycle-a-dark-satire
  • Discuss peer review challenges, submission experiences, decision fairness, reviewer quality, and biases at CVPR, ICCV, ECCV, VR, SIGGRAPH, EUROGRAPHICS, ICRA, IROS, RSS etc.

    2 Topics
    2 Posts
    river
    CVPR 2025 has introduced new policies to address irresponsible reviewing. Under the new guidelines, reviewers who fail to submit timely and thorough reviews may have their own paper submissions desk-rejected at the discretion of the Program Chairs, a move aimed at enhancing the quality and fairness of the peer-review process.

    In a recent announcement, Area Chairs (ACs) of CVPR 2025 identified a number of highly irresponsible reviewers: those who either abandoned the review process entirely or submitted egregiously low-quality reviews, including some generated by large language models (LLMs). Following a thorough investigation, the Program Chairs (PCs) decided, in accordance with the previously communicated CVPR 2025 policies, to desk-reject 19 papers authored by confirmed highly irresponsible reviewers, papers that would otherwise have been accepted. The affected authors have been informed of this decision.

    This action underscores CVPR's commitment to maintaining high standards in academic publishing. While some may view this collective accountability as controversial, many in the research community support these measures as essential for upholding the integrity of the conference. The policies reflect a broader trend toward holding reviewers accountable for their contributions to peer review: by ensuring that reviewers provide timely and constructive feedback, CVPR aims to foster a more equitable and rigorous academic environment.
  • Discuss peer review, submission experiences, and decision challenges for NLP research at ACL, EMNLP, NAACL, and COLING.

    4 Topics
    8 Posts
    cqsyf
    See here for a crowd-sourced score distribution (biased, of course): https://papercopilot.com/statistics/acl-statistics/acl-2025-statistics/
  • SIGKDD, SIGMOD, ICDE, CIKM, WSDM, VLDB, ICDM and PODS

    2 Topics
    9 Posts
    river
    I made a summary of data points from KDD 2025 1st round results. Each row lists the reported scores (novelty / technical quality / confidence, as given; the per-reviewer grouping is not always recoverable), the rebuttal outcome, the final decision, and notes:

    1. Scores 3 3 3 3 3 3 4 3 3 2 3 2 | Addressed issues | Accepted | "The rebuttal was full of twists and turns; so hard"
    2. Scores 2 2 3 2 2 3 3 2 2 3 3 3 3 3 3 | Submitted | Rejected | "Maybe I should just cut my losses and run"
    3. Scores 4 3 3 1 4 4 2 2 | Explained issues | Rejected | "Large variance across reviewers; no score changes post-rebuttal"
    4. Scores 3 3 3 3 3 2 | Unsure | 🟡 Unknown | "Still considering rebuttal; not sure if it's worth the effort"
    5. Scores 3 3 3 3 3 3 3 3 3 3 3 2 | Minor clarifications | Accepted | "Final scores unchanged but accepted after positive AC decision"
    6. Scores 3 4 3 3 3 3 2 2 3 2 2 3 | Clarified results | Rejected | "Novelty OK, but TQ too weak; didn't convince reviewers"
    7. Scores 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 | Submitted | Accepted | "Strong consensus; one of the smoother cases"
    8. Scores 3 3 3 3 3 2 | No rebuttal | Rejected | "No rebuttal submitted; borderline scores"
    9. Scores 3 3 2 2 3 3 2 2 | Rebuttal sent | Rejected | "Reviewers did not change their opinion"
    10. Scores 3 3 3 3 3 3 3 3 3 3 2 2 | Rebuttal helped | Accepted | "Accepted despite one weaker reviewer"
    11. Scores 3 3 3 3 3 3 3 3 3 3 3 3 | Rebuttal sent | 🟡 Unknown | "In limbo; waiting for final decision"
    12. Scores 3 3 3 3 2 2 2 2 | Not convincing | Rejected | "Work deemed not 'KDD-level' despite rebuttal"
    13. Scores 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 | Submitted | Accepted | "Perfectly consistent reviewers; smooth acceptance"
    14. Scores 3 3 3 2 3 3 2 2 | Rebuttal failed | Rejected | "Low technical quality and variance led to rejection"

    Note: Data sourced from community discussions on Zhihu, Reddit, and OpenReview threads. Subject to sample bias.
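If you want to tally these data points yourself, here is a minimal sketch. It uses a hand-transcribed subset of the rows above; since the source does not make the novelty/TQ/confidence split unambiguous, each row is pooled into a single flat score list, which is my own simplification rather than the poster's format:

```python
from collections import defaultdict
from statistics import mean

# Subset of the community-reported KDD 2025 rows, as (flat score list, decision).
rows = [
    ([3, 3, 3, 3, 3, 3, 4, 3, 3, 2, 3, 2], "Accepted"),
    ([2, 2, 3, 2, 2, 3, 3, 2, 2, 3, 3, 3, 3, 3, 3], "Rejected"),
    ([4, 3, 3, 1, 4, 4, 2, 2], "Rejected"),
    ([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2], "Accepted"),
    ([3, 3, 3, 3, 2, 2, 2, 2], "Rejected"),
]

# Pool all individual scores under their final decision.
by_decision = defaultdict(list)
for scores, decision in rows:
    by_decision[decision].extend(scores)

for decision, scores in sorted(by_decision.items()):
    print(f"{decision}: n_scores={len(scores)}, mean={mean(scores):.2f}")
```

On this subset, accepted papers average slightly higher pooled scores than rejected ones, though with such a small, self-selected sample the gap says little.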
  • ICSE, OSDI, SOSP, POPL, PLDI, FSE/ESEC, ISSTA, OOPSLA and ASE

    1 Topic
    1 Post
    S
    CCF Recommendation | Conference | Deadline (transferred from ccf-deadlines):

    - CCF-A | ACM Symposium on Operating Systems Principles (SOSP) | October 13-16, 2025, Lotte Hotel World, Seoul, Republic of Korea | Deadline: Fri Apr 18th 2025 08:59:59 BST (2025-04-17 23:59:59 UTC-8) | Website of SOSP
    - CCF-B | International Static Analysis Symposium (SAS) | October 12-18, 2025, Singapore | Deadline: Mon May 5th 2025 12:59:59 BST (2025-05-04 23:59:59 UTC-12) | Website of SAS
    - CCF-A | International Conference on Automated Software Engineering (ASE) | November 16-20, 2025, Seoul, South Korea | Deadline: Sat May 31st 2025 12:59:59 BST (2025-05-30 23:59:59 UTC-12) | Website of ASE
    - CCF-B | International Conference on Functional Programming (ICFP) | October 12-18, 2025, Singapore | Deadline: Fri Jun 13th 2025 12:59:59 BST (2025-06-12 23:59:59 UTC-12) | Website of ICFP
    - CCF-A | International Conference on Software Engineering (ICSE) | April 12-18, 2026, Rio de Janeiro, Brazil | Deadline: Sat Jul 19th 2025 12:59:59 BST (2025-07-18 23:59:59 UTC-12) | Website of ICSE
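Since the deadlines above are quoted in unusual offsets (UTC-12 is "Anywhere on Earth", AoE), a small sketch for normalizing them to UTC can help avoid off-by-a-day mistakes. The dates below are transcribed from the list; treat them as illustrative and always verify against the official CFP:

```python
from datetime import datetime, timedelta, timezone

# (conference, naive local deadline, UTC offset in hours) per the list above.
deadlines = [
    ("SOSP 2025", datetime(2025, 4, 17, 23, 59, 59), -8),
    ("SAS 2025",  datetime(2025, 5, 4, 23, 59, 59), -12),
    ("ASE 2025",  datetime(2025, 5, 30, 23, 59, 59), -12),
    ("ICFP 2025", datetime(2025, 6, 12, 23, 59, 59), -12),
    ("ICSE 2026", datetime(2025, 7, 18, 23, 59, 59), -12),
]

for name, local_dt, offset_hours in deadlines:
    # Attach the fixed offset, then convert to UTC for comparison.
    tz = timezone(timedelta(hours=offset_hours))
    utc_deadline = local_dt.replace(tzinfo=tz).astimezone(timezone.utc)
    print(f"{name}: {utc_deadline:%Y-%m-%d %H:%M} UTC")
```

For example, the SOSP deadline of 2025-04-17 23:59:59 UTC-8 is 2025-04-18 07:59:59 in UTC.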
  • HCI, CSCW, UniComp, UIST, EuroVis and IEEE VIS

    0 Topics
    0 Posts
    No new posts.
  • Anything around peer review for conferences such as SIGIR, WWW, ICMR, ICME, ECIR, ICASSP and ACM MM.

    1 Topic
    1 Post
    river
    Recently, someone surfaced (again) a method to query the decision status of a paper submission before the official release for ICME 2025. By sending requests to a specific API endpoint in the CMT system (https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)), one can read the submission status from a StatusId field, where 1 means pending, 2 indicates acceptance, and 3 indicates rejection.

    This trick is not limited to ICME 2025. It appears that the same method can be applied to several other conferences, including IJCAI, ICME, ICASSP, IJCNN and ICMR.

    However, it is important to emphasize that using this technique violates the fairness and integrity of the peer-review process. Exploiting such a loophole undermines the confidentiality and impartiality that are essential to academic evaluations. This is a potential breach of academic ethics, and an official fix is needed to prevent abuse. Below is a simplified Python script that demonstrates how this status monitoring might work. Warning: This code is provided solely for educational purposes to illustrate the vulnerability. It should not be used to bypass proper review procedures.
    import requests
    import time
    import smtplib
    from email.mime.text import MIMEText
    from email.header import Header
    import logging

    # Configure logging to both a file and the console
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(levelname)s - %(message)s',
        handlers=[
            logging.FileHandler("submission_monitor.log"),
            logging.StreamHandler()
        ]
    )

    # List of submission URLs to monitor (replace 'Your_paper_id' accordingly)
    SUBMISSION_URLS = [
        "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)",
        "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)"
    ]

    # Email configuration (replace with your actual details)
    EMAIL_CONFIG = {
        "smtp_server": "smtp.qq.com",
        "smtp_port": 587,
        "sender": "your_email@example.com",
        "password": "your_email_password",
        "receiver": "recipient@example.com"
    }

    def get_status(url):
        """
        Check the submission status from the provided URL.
        Returns the status ID and a success flag.
        """
        try:
            headers = {
                'User-Agent': 'Mozilla/5.0',
                'Accept': 'application/json',
                'Referer': 'https://cmt3.research.microsoft.com/ICME2025/',
                # Insert your cookie here after logging in to CMT
                'Cookie': 'your_full_cookie'
            }
            response = requests.get(url, headers=headers, timeout=30)
            if response.status_code == 200:
                data = response.json()
                status_id = data.get("StatusId")
                logging.info(f"URL: {url}, StatusId: {status_id}")
                return status_id, True
            else:
                logging.error(f"Failed request. Status code: {response.status_code} for URL: {url}")
                return None, False
        except Exception as e:
            logging.error(f"Error while checking status for URL: {url} - {e}")
            return None, False

    def send_notification(subject, message):
        """
        Send an email notification with the provided subject and message.
        """
        try:
            msg = MIMEText(message, 'plain', 'utf-8')
            msg['Subject'] = Header(subject, 'utf-8')
            msg['From'] = EMAIL_CONFIG["sender"]
            msg['To'] = EMAIL_CONFIG["receiver"]
            server = smtplib.SMTP(EMAIL_CONFIG["smtp_server"], EMAIL_CONFIG["smtp_port"])
            server.starttls()
            server.login(EMAIL_CONFIG["sender"], EMAIL_CONFIG["password"])
            server.sendmail(EMAIL_CONFIG["sender"], [EMAIL_CONFIG["receiver"]], msg.as_string())
            server.quit()
            logging.info(f"Email sent successfully: {subject}")
            return True
        except Exception as e:
            logging.error(f"Failed to send email: {e}")
            return False

    def monitor_submissions():
        """
        Monitor the status of submissions continuously.
        """
        notified = set()
        logging.info("Starting submission monitoring...")
        while True:
            for url in SUBMISSION_URLS:
                if url in notified:
                    continue
                status, success = get_status(url)
                if success and status is not None and status != 1:
                    email_subject = f"Submission Update: {url}"
                    email_message = f"New StatusId: {status}"
                    if send_notification(email_subject, email_message):
                        notified.add(url)
                        logging.info(f"Notification sent for URL: {url} with StatusId: {status}")
            if all(url in notified for url in SUBMISSION_URLS):
                logging.info("All submission statuses updated. Ending monitoring.")
                break
            time.sleep(60)  # Wait for 60 seconds before checking again

    if __name__ == "__main__":
        monitor_submissions()

    Parting thoughts

    While the discovery of this loophole may seem like an ingenious workaround, it is fundamentally unethical and a clear violation of the fairness expected in academic peer review. Exploiting such vulnerabilities not only compromises the integrity of the review process but also undermines trust in scholarly communication. We recommend that the CMT system administrators implement an official fix to close this gap. The academic community should prioritize fairness and the preservation of rigorous, unbiased review standards over any short-term gains that might come from exploiting such flaws.
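On the fix side, one plausible mitigation (a sketch of my own, not CMT's actual architecture: the field names and notification date are assumptions) is a server-side response filter that masks decision fields from the submissions API until the official notification time has passed:

```python
from datetime import datetime, timezone

# Hypothetical notification time and the fields that leak the decision early.
NOTIFICATION_TIME = datetime(2025, 3, 20, 0, 0, tzinfo=timezone.utc)
HIDDEN_FIELDS = {"StatusId", "Status"}

def filter_submission(payload: dict, now: datetime) -> dict:
    """Return a copy of a submission API payload with decision fields
    removed until the notification time has been reached."""
    if now >= NOTIFICATION_TIME:
        return dict(payload)
    return {k: v for k, v in payload.items() if k not in HIDDEN_FIELDS}
```

Before the notification time, `filter_submission({"Id": 1, "StatusId": 2}, now)` would return only `{"Id": 1}`; afterwards the full payload passes through unchanged. Applying such a filter at the API layer would close the polling loophole without changing the review workflow itself.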