  • Discuss everything about peer review in computer science research: its successes, failures, and the challenges in between.

    9 Topics
    14 Posts
    root
    Interesting research that got accepted to EMNLP 2023 Findings.
  • Discuss peer review challenges in AI/ML research — submission, review quality, bias, and decision appeals at ICLR, ICML, NeurIPS, AAAI, IJCAI, AISTATS and COLT.

    12 Topics
    33 Posts
    Just noticed that ICML 2025 has taken a small but meaningful step toward OpenReview: the reviews of accepted papers will eventually be made public. While this isn't full-fledged open review yet, it's a clear signal that change is coming.

    As a reviewer myself, I felt overwhelmed by the sheer volume of submissions this year, and I noticed a clear drop in quality. Some papers were clearly submitted in a "let's try our luck" fashion. In this context, I sincerely hope that top AI/ML conferences will eventually follow ICLR's model and adopt fully open peer review.

    Why Open Review Matters

    For Reviewers: Knowing that reviews will be public adds a layer of accountability. It encourages more thoughtful, constructive, and responsible feedback. No more careless 1-scores or copy-pasted comments.

    For Authors: When reviews are public, authors will think twice before submitting undercooked ideas. The prospect of negative reviews being visible online can act as a natural filter against "lottery-style" submissions.

    For the Community: Public reviews help newcomers learn how to write better papers and better reviews. They also reduce the burden on reviewers caused by the "Fibonacci submission strategy" (endless revise-and-resubmits across top venues), and ultimately improve the quality of accepted papers.

    Final Thoughts

    Open review isn't a silver bullet, but in this era of exploding submission numbers, it's a change worth pursuing. I hope to see more top-tier conferences move toward transparent and accountable reviewing, bringing the focus back to research quality, not just acceptance rates.
  • Discuss peer review challenges, submission experiences, decision fairness, reviewer quality, and biases at CVPR, ICCV, ECCV, VR, SIGGRAPH, EUROGRAPHICS, ICRA, IROS, RSS etc.

    2 Topics
    2 Posts
    river
    CVPR 2025 has introduced new policies to address the issue of irresponsible reviewing. Under the new guidelines, reviewers who fail to submit timely and thorough reviews may have their own paper submissions desk-rejected at the discretion of the Program Chairs. This move aims to enhance the quality and fairness of the peer-review process.

    In a recent announcement, Area Chairs (ACs) of CVPR 2025 identified a number of highly irresponsible reviewers: those who either abandoned the review process entirely or submitted egregiously low-quality reviews, including some generated by large language models (LLMs). Following a thorough investigation, the Program Chairs (PCs) decided to desk-reject 19 papers authored by confirmed highly irresponsible reviewers, which would have been accepted otherwise, in accordance with the previously communicated CVPR 2025 policies. The affected authors have been informed of this decision.

    This action underscores CVPR's commitment to maintaining high standards in academic publishing. While some may view this collective accountability as controversial, many in the research community support these measures as essential for upholding the integrity of the conference. These policies reflect a broader trend in the academic community toward holding reviewers accountable for their contributions to the peer-review process. By ensuring that reviewers provide timely and constructive feedback, CVPR aims to foster a more equitable and rigorous academic environment.
  • Discuss peer review, submission experiences, and decision challenges for NLP research at ACL, EMNLP, NAACL, and COLING.

    4 Topics
    8 Posts
    cqsyf
    See here for a crowd-sourced score distribution (biased ofc): https://papercopilot.com/statistics/acl-statistics/acl-2025-statistics/ [image: 1743634054428-screenshot-2025-04-03-at-00.47.22.png]
  • SIGKDD, SIGMOD, ICDE, CIKM, WSDM, VLDB, ICDM and PODS

    2 Topics
    2 Posts
    root
    Hey fellow KDD authors and ML researchers! [image: 1743625304323-screenshot-2025-04-02-at-22.21.32.png]

    As we approach the release of the second-round review results for KDD 2025, it's time to gather, share experiences, and support each other: whether you're nervously checking for updates, celebrating a win, or figuring out your next steps.

    🧩 What to Discuss Here:
    - What scores did you get after Round 1 & rebuttal?
    - Did your rebuttal help push your paper toward acceptance?
    - Were you assigned to the Research or ADS track?
    - Are you seeing trends in novelty / technical quality (TQ) scores?
    - What are your thoughts on this year's review quality?

    Be cautious with rebuttals
    Some authors reported issues with anonymous external code links (e.g., GitHub repos). Even if anonymized, external links in the rebuttal can trigger desk rejection depending on PC interpretation. If you're unsure, it's safest to:
    - Clearly reference what's already in the submission
    - Avoid linking out to anything not explicitly allowed
    - Clarify any confusion in the rebuttal without adding new external content

    Community Polls & Stats
    Some have started collecting anonymized data points on scores and acceptance results; it's great to get a sense of where you stand. If you've got numbers (e.g., Novelty: 4 3 3 2, TQ: 3 2 3 2), feel free to drop them in the thread and compare notes! Let's try to keep this thread constructive and supportive. Every score is a story, and every rejection can be a redirection.

    Looking Ahead
    Whether you're aiming for camera-ready or preparing a resubmission, this is the perfect moment to share, learn, and connect with others in the same boat. Feel free to comment below with your situation, ask questions, or just vent — we're here for it!

    Stay strong, and good luck to everyone — A fellow author + reviewer
  • ICSE, OSDI, SOSP, POPL, PLDI, FSE/ESEC, ISSTA, OOPSLA and ASE

    1 Topics
    1 Posts
    CCF Recommendation Conference Deadlines (transferred from ccf-deadlines)

    SOSP, ACM Symposium on Operating Systems Principles (CCF-A)
    October 13-16, 2025, Lotte Hotel World, Seoul, Republic of Korea
    Deadline: Fri Apr 18th 2025 08:59:59 BST (2025-04-17 23:59:59 UTC-8)
    Website of SOSP

    SAS, International Static Analysis Symposium (CCF-B)
    October 12-18, 2025, Singapore
    Deadline: Mon May 5th 2025 12:59:59 BST (2025-05-04 23:59:59 UTC-12)
    Website of SAS

    ASE, International Conference on Automated Software Engineering (CCF-A)
    November 16-20, 2025, Seoul, South Korea
    Deadline: Sat May 31st 2025 12:59:59 BST (2025-05-30 23:59:59 UTC-12)
    Website of ASE

    ICFP, International Conference on Functional Programming (CCF-B)
    October 12-18, 2025, Singapore
    Deadline: Fri Jun 13th 2025 12:59:59 BST (2025-06-12 23:59:59 UTC-12)
    Website of ICFP

    ICSE, International Conference on Software Engineering (CCF-A)
    April 12-18, 2026, Rio de Janeiro, Brazil
    Deadline: Sat Jul 19th 2025 12:59:59 BST (2025-07-18 23:59:59 UTC-12)
    Website of ICSE
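    The deadlines above are listed as UTC-offset timestamps (e.g., SOSP's 2025-04-17 23:59:59 UTC-8), so it can help to convert them to your own timezone before planning a submission. A minimal sketch, assuming Python 3.9+ with the standard-library zoneinfo module; the target zone (Asia/Seoul) is just an example:

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# SOSP deadline as listed above: 2025-04-17 23:59:59 UTC-8
deadline = datetime(2025, 4, 17, 23, 59, 59,
                    tzinfo=timezone(timedelta(hours=-8)))

# Convert to UTC and to an example local zone
print(deadline.astimezone(timezone.utc))            # 2025-04-18 07:59:59+00:00
print(deadline.astimezone(ZoneInfo("Asia/Seoul")))  # 2025-04-18 16:59:59+09:00
```

    The same two-step pattern (attach the listed fixed offset, then `astimezone`) works for any of the entries above.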
  • HCI, CSCW, UniComp, UIST, EuroVis and IEEE VIS

    0 Topics
    0 Posts
    No new posts.
  • Anything around peer review for conferences such as SIGIR, WWW, ICMR, ICME, ECIR, ICASSP and ACM MM.

    1 Topics
    1 Posts
    river
    Recently, someone surfaced (again) a method to query the decision status of an ICME 2025 paper submission before the official release. By sending a GET request to a specific API endpoint in the CMT system (https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)), one can read the submission status from the StatusId field, where 1 means pending, 2 means accepted, and 3 means rejected.

    This trick is not limited to ICME 2025. It appears that the same method can be applied to several other conferences hosted on CMT, including IJCAI, ICME, ICASSP, IJCNN, and ICMR.

    However, it is important to emphasize that using this technique violates the fairness and integrity of the peer-review process. Exploiting such a loophole undermines the confidentiality and impartiality that are essential to academic evaluation. This is a potential breach of academic ethics, and an official fix is needed to prevent abuse.

    Below is a simplified Python script that demonstrates how this status monitoring might work. Warning: this code is provided solely for educational purposes to illustrate the vulnerability. It should not be used to bypass proper review procedures.
```python
import requests
import time
import smtplib
from email.mime.text import MIMEText
from email.header import Header
import logging

# Configure logging to both a file and the console
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("submission_monitor.log"),
        logging.StreamHandler()
    ]
)

# List of submission URLs to monitor (replace 'Your_paper_id' accordingly)
SUBMISSION_URLS = [
    "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)",
]

# Email configuration (replace with your actual details)
EMAIL_CONFIG = {
    "smtp_server": "smtp.qq.com",
    "smtp_port": 587,
    "sender": "your_email@example.com",
    "password": "your_email_password",
    "receiver": "recipient@example.com",
}

def get_status(url):
    """Check the submission status at the given URL.

    Returns (status_id, success_flag).
    """
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0',
            'Accept': 'application/json',
            'Referer': 'https://cmt3.research.microsoft.com/ICME2025/',
            # Insert your cookie here after logging in to CMT
            'Cookie': 'your_full_cookie',
        }
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 200:
            status_id = response.json().get("StatusId")
            logging.info(f"URL: {url}, StatusId: {status_id}")
            return status_id, True
        logging.error(f"Failed request. Status code: {response.status_code} for URL: {url}")
        return None, False
    except Exception as e:
        logging.error(f"Error while checking status for URL: {url} - {e}")
        return None, False

def send_notification(subject, message):
    """Send an email notification with the provided subject and message."""
    try:
        msg = MIMEText(message, 'plain', 'utf-8')
        msg['Subject'] = Header(subject, 'utf-8')
        msg['From'] = EMAIL_CONFIG["sender"]
        msg['To'] = EMAIL_CONFIG["receiver"]
        server = smtplib.SMTP(EMAIL_CONFIG["smtp_server"], EMAIL_CONFIG["smtp_port"])
        server.starttls()
        server.login(EMAIL_CONFIG["sender"], EMAIL_CONFIG["password"])
        server.sendmail(EMAIL_CONFIG["sender"], [EMAIL_CONFIG["receiver"]], msg.as_string())
        server.quit()
        logging.info(f"Email sent successfully: {subject}")
        return True
    except Exception as e:
        logging.error(f"Failed to send email: {e}")
        return False

def monitor_submissions():
    """Poll each submission URL until its status changes from 'pending'."""
    notified = set()
    logging.info("Starting submission monitoring...")
    while True:
        for url in SUBMISSION_URLS:
            if url in notified:
                continue
            status, success = get_status(url)
            # StatusId 1 = pending; anything else means a decision was posted
            if success and status is not None and status != 1:
                if send_notification(f"Submission Update: {url}", f"New StatusId: {status}"):
                    notified.add(url)
                    logging.info(f"Notification sent for URL: {url} with StatusId: {status}")
        if all(url in notified for url in SUBMISSION_URLS):
            logging.info("All submission statuses updated. Ending monitoring.")
            break
        time.sleep(60)  # Wait 60 seconds before checking again

if __name__ == "__main__":
    monitor_submissions()
```

    Parting thoughts

    While the discovery of this loophole may seem like an ingenious workaround, it is fundamentally unethical and a clear violation of the fairness expected in academic peer review. Exploiting such vulnerabilities not only compromises the integrity of the review process but also undermines trust in scholarly communication. We urge the CMT system administrators to implement an official fix to close this gap. The academic community should prioritize fairness and the preservation of rigorous, unbiased review standards over any short-term gains that might come from exploiting such flaws.
  • Anonymously share data, results, or materials. Useful for rebuttals, blind submissions and more. Only unverified users can post (and edit or delete anytime afterwards).

    2 Topics
    2 Posts
    Table 1. Performance Comparison on Benchmark Datasets

    | Method | CIFAR-10 (Acc %) | CIFAR-100 (Acc %) | TinyImageNet (Acc %) | Params (M) | FLOPs (G) |
    | --- | --- | --- | --- | --- | --- |
    | ResNet-18 | 94.5 | 76.3 | 64.1 | 11.2 | 1.8 |
    | ViT-Small | 95.2 | 77.9 | 65.7 | 21.7 | 4.6 |
    | Ours (GraphFormer) | 96.1 | 79.5 | 67.3 | 19.8 | 3.9 |

    Table 2. Ablation Study on Temporal Encoding

    | Method Variant | CIFAR-10 (Acc %) | TinyImageNet (Acc %) |
    | --- | --- | --- |
    | Ours (No Time Encoding) | 95.4 | 66.1 |
    | Ours (Sinusoidal Only) | 95.8 | 66.8 |
    | Ours (Learnable Time) | 96.1 | 67.3 |

    Table 3. Robustness to Input Noise on CIFAR-10 (% Accuracy)

    | Method | No Noise | Gaussian (σ=0.1) | Gaussian (σ=0.3) | FGSM (ε=0.1) |
    | --- | --- | --- | --- | --- |
    | ResNet-18 | 94.5 | 91.2 | 83.7 | 78.9 |
    | ViT-Small | 95.2 | 92.4 | 85.1 | 80.3 |
    | Ours | 96.1 | 93.5 | 87.0 | 83.7 |

    Table 4. Generalization to Out-of-Distribution (OOD) Data

    | Method | In-Domain (CIFAR-10) | OOD (SVHN) | OOD (CIFAR-10-C) |
    | --- | --- | --- | --- |
    | ResNet-18 | 94.5 | 76.3 | 71.2 |
    | ViT-Small | 95.2 | 78.1 | 73.0 |
    | Ours | 96.1 | 80.5 | 75.3 |