  • Discuss everything about peer review in computer science research: its successes, failures, and the challenges in between.

    8 Topics
    12 Posts
We're excited to introduce a new category on cspaper.org: Anonymous Sharing & Supplementary Materials.

Purpose
This category is designed specifically for anonymized sharing of supplementary materials such as:
• Additional experimental results or figures
• Extended ablation studies
• Links to datasets or demos
• Supplementary explanations that didn't fit in the main paper
It is especially useful during rebuttals, when you may need to share extra content with reviewers while staying compliant with anonymity requirements and strict page limits.

How It Works
• Only unverified (but registered) users can post in this category: make sure to skip filling in your email during registration.
• You can edit or delete your post at any time.
• Use a username that doesn't reveal your identity.
• Share the link to your anonymous supplementary-materials post/topic.
This keeps everything in line with double-blind peer-review policies while giving you a legitimate way to share supporting materials.

Stay Anonymous
Please do not use real names or affiliations when posting. If you're unsure how to create an anonymous username, try one of these generators: Jimpix Random Username Generator, UsernameGenerator.com, NordPass Username Generator, or Dashlane Username Generator.

Example Use Case
You're writing the rebuttal for a paper submitted to and reviewed at NeurIPS/ICML/ACL. You need to trim your rebuttal to fit the length limit, but you also want to share detailed results or code (which would push you past that limit) with reviewers. Just post them anonymously in that category and include the link in your response!

Pro Tip: you can make your URL even shorter by using services like TinyURL or Bitly (see the sketch after this post).

Start Posting
Visit the category here and share your materials: https://cspaper.org/category/10/anonymous-sharing-supplementary-materials
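On the Pro Tip above: here is a minimal Python sketch of programmatic URL shortening, assuming TinyURL's long-standing public api-create.php endpoint (unauthenticated, returning the shortened URL as plain text) still works this way; treat it as an illustration rather than a documented, guaranteed API, and note that Bitly requires its own authenticated API instead.

import requests

def shorten_url(long_url: str) -> str:
    """Shorten a URL via TinyURL's public api-create.php endpoint.
    Assumption: the endpoint is still available and unauthenticated;
    check TinyURL's current terms before relying on it."""
    resp = requests.get(
        "https://tinyurl.com/api-create.php",
        params={"url": long_url},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text.strip()  # the response body is the shortened URL

if __name__ == "__main__":
    print(shorten_url(
        "https://cspaper.org/category/10/anonymous-sharing-supplementary-materials"
    ))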
  • Discuss peer review challenges in AI/ML research — submission, review quality, bias, and decision appeals at ICLR, ICML, NeurIPS, AAAI, IJCAI, AISTATS and COLT.

    12 Topics
    33 Posts
Just noticed that ICML 2025 has taken a small but meaningful step toward OpenReview: the reviews of accepted papers will eventually be made public. While this isn't full-fledged open review yet, it's a clear signal that change is coming.

As a reviewer myself, I was overwhelmed by the sheer volume of submissions this year, and I noticed a clear drop in quality. Some papers were clearly submitted in a "let's try our luck" fashion. In this context, I sincerely hope that top AI/ML conferences will eventually follow ICLR's model and adopt fully open peer review.

Why Open Review Matters
• For Reviewers: Knowing that reviews will be public adds a layer of accountability. It encourages more thoughtful, constructive, and responsible feedback. No more careless 1-scores or copy-pasted comments.
• For Authors: When reviews are public, authors will think twice before submitting undercooked ideas. The prospect of negative reviews being visible online acts as a natural filter against "lottery-style" submissions.
• For the Community: Public reviews help newcomers learn how to write better papers and better reviews. They also reduce the reviewer burden caused by the "Fibonacci submission strategy" (endless revise-and-resubmits across top venues), and ultimately improve the quality of accepted papers.

Final Thoughts
Open review isn't a silver bullet, but in this era of exploding submission numbers, it's a change worth pursuing. I hope to see more top-tier conferences move toward transparent and accountable reviewing, bringing the focus back to research quality, not just acceptance rates.
  • Discuss peer review challenges, submission experiences, decision fairness, reviewer quality, and biases at CVPR, ICCV, ECCV, VR, SIGGRAPH, EUROGRAPHICS, ICRA, IROS, RSS etc.

    2 Topics
    2 Posts
CVPR 2025 has introduced new policies to address the issue of irresponsible reviewing. Under the new guidelines, reviewers who fail to submit timely and thorough reviews may have their own paper submissions desk-rejected at the discretion of the Program Chairs. This move aims to enhance the quality and fairness of the peer-review process.

In a recent announcement, Area Chairs (ACs) of CVPR 2025 identified a number of highly irresponsible reviewers: those who either abandoned the review process entirely or submitted egregiously low-quality reviews, including some generated by large language models (LLMs). Following a thorough investigation, the Program Chairs (PCs) decided to desk-reject 19 papers authored by confirmed highly irresponsible reviewers, papers that would otherwise have been accepted, in accordance with the previously communicated CVPR 2025 policies. The affected authors have been informed of this decision.

This action underscores CVPR's commitment to maintaining high standards in academic publishing. While some may view this collective accountability as controversial, many in the research community support these measures as essential for upholding the integrity of the conference.

These policies reflect a broader trend in the academic community toward holding reviewers accountable for their contributions to the peer-review process. By ensuring that reviewers provide timely and constructive feedback, CVPR aims to foster a more equitable and rigorous academic environment.
  • Discuss peer review, submission experiences, and decision challenges for NLP research at ACL, EMNLP, NAACL, and COLING.

    4 Topics
    4 Posts
The Verdict: ACL 2025 Review Scores Decoded
This year's Overall Assessment (OA) descriptions reveal a brutal hierarchy:
• 5.0 "Award-worthy (top 2.5%)"
• 4.0 "ACL-worthy"
• 3.5 "Borderline Conference"
• 3.0 "Findings-tier" (Translation: "We'll take it… but hide it in the appendix")
• 1.0 "Do not resubmit" (a.k.a. "Burn this and start over")
Pro tip: a 3.5+ OA average likely means main conference; 3.0+ scrapes into Findings. Meta-reviewers now hold life-or-death power: one 4.0 can save a 3.0 from oblivion. (A rough sketch of this rule of thumb follows at the end of this post.)

Nightmare Fuel: The 6-Reviewer Special
Some papers got 6 reviewers, likely because emergency reviewers were drafted last-minute. Imagine rebutting 6 conflicting opinions… while praying the meta-reviewer actually reads your response.
Rebuttal strategy:
• 2.0? Give up. (Odds of salvation: ~0%)
• 2.5? Worth a shot.
• 3.0? Fight like hell.

The ARR Meat Grinder Just Got Worse
New changes to ARR (ACL Rolling Review, a.k.a. the Academic Rebuttal Rumble):
• 5 cycles/year now (April's cycle vanished; June moved to May).
• EMNLP's deadline looms closer, leaving less time to pivot after ACL rejections.
• LLM stampede: 8,000+ submissions per ARR cycle! "Back in the day, ACL had 3,000 submissions. No Findings, no ARR, no LLM hype-train. Now it's just a content farm with peer review."

How to Survive the Madness
• Got a 3.0? Pray your meta-reviewer is merciful.
• 🤬 Toxic review? File an "issue" (but expect crickets).
• ARR loophole: score low in February? Resubmit to the May ARR cycle and aim for EMNLP.

The Big Picture: NLP's Broken Incentives
• Reviewer fatigue: emergency reviewers = rushed, clueless feedback.
• LLM monoculture: 90% of papers are "We scaled it bigger" or "Here's a new benchmark for our 0.2% SOTA."
• Findings graveyard: where "technically sound but unsexy" papers go to die.

Final thought: "If you're not gaming the system, the system is gaming you."
Adapted from JOJO极智算法 (2025-03-28). Share your ACL 2025 horror stories below! Did you rebut or run?
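Purely as an illustration of the folklore thresholds above (this is the post's rule of thumb, not any official ACL/ARR policy), here is a tiny Python sketch; the function name and cutoff values are assumptions taken straight from the pro tip.

def likely_outcome(oa_scores):
    """Map an average Overall Assessment to the post's rule of thumb:
    >= 3.5 -> likely main conference, >= 3.0 -> likely Findings,
    anything lower -> likely reject. Unofficial, illustrative only."""
    avg = sum(oa_scores) / len(oa_scores)
    if avg >= 3.5:
        return avg, "likely main conference"
    if avg >= 3.0:
        return avg, "likely Findings"
    return avg, "likely reject"

# Example: three reviewers gave 3.0, 3.5 and 4.0 -> average 3.5
print(likely_outcome([3.0, 3.5, 4.0]))  # (3.5, 'likely main conference')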
  • SIGKDD, SIGMOD, ICDE, CIKM, WSDM, VLDB, ICDM and PODS

1 Topic
1 Post
I found a shared post describing a unique combination of author and reviewer roles at the KDD 2024 conference, so I am re-posting the main narrative here.

Initially, "I" (the author) approached KDD 2024 enthusiastically, both as an author and as a reviewer (Area Chair). Yet the journey quickly turned into a profound lesson about the state of peer review.

Early on, as I started reviewing, I noticed significant discrepancies among reviewers. Out of the six papers I reviewed, five had received feedback from five to seven reviewers each, with opinions diverging drastically. I wondered how authors could adequately respond during rebuttal to such varying comments.

My own submission had its challenges. The first round of reviews came in unevenly:

Reviewer | Scope | Novelty | Technical Quality | Presentation Quality | Reproducibility | Confidence
R1       | 4/4   | 3/7     | 3/7               | 2/3                  | 2/3             | 4/4
R2       | 4/4   | 4/7     | 4/7               | 3/3                  | 3/3             | 3/4
R3       | 4/4   | 4/7     | 6/7               | 3/3                  | 3/3             | 3/4
R4       | 2/4   | 2/7     | 3/7               | 1/3                  | 2/3             | 4/4
R5       | 4/4   | 6/7     | 5/7               | 3/3                  | 3/3             | 4/4
R6       | 4/4   | 6/7     | 5/7               | 3/3                  | 3/3             | 2/4

Notably, Reviewer 4 gave suspiciously low scores, prompting concerns of potential malicious intent.

The rebuttal phase arrived, and I faced uncertainty. High-scoring reviewers promptly maintained their original scores, while those who had given average or low scores initially remained silent, creating anxiety. Especially troubling was the reviewer who had clearly scored maliciously low, who later responded aggressively after being prompted, exacerbating my frustration with a longer, harsher critique than the original review.

While reviewing other submissions, I identified concerning trends in KDD's review process. Papers remotely similar to existing works were swiftly labeled as having "poor novelty", and those without extensive mathematical derivations were marked as having poor technical soundness. KDD, a Data Mining conference, seemed to be applying overly stringent Machine Learning standards, signaling deeper issues within its reviewing culture.

After the rebuttal concluded, none of the reviewers changed their scores. My hopes waned further upon discovering inconsistencies even among my own reviews. For example, two submissions that I rated highly, and that were similarly praised by other reviewers, were each undermined by a single harsh reviewer, significantly impacting their fate.

On May 17, 2024, came the heartbreaking final update: my paper was ultimately rejected by the Senior Area Chair for "lack of novelty", alongside all six anomaly detection papers I reviewed. Disappointed and exhausted, I now advise aspiring researchers to reconsider their paths: perhaps shifting toward more foundational Machine Learning, away from the turbulence of anomaly detection and traditional Data Mining.

Reflecting on this journey, I remain hopeful. Someday, perhaps as an Area Chair myself, I'll better understand the motivations of certain "distinguished" reviewers. Until then, resilience remains crucial. But right now, it's time to take a break and perhaps shed a quiet tear.
  • ICSE, OSDI, SOSP, POPL, PLDI, FSE/ESEC, ISSTA, OOPSLA and ASE

1 Topic
1 Post
CCF-recommended conference deadlines (transferred from ccf-deadlines):

SOSP, ACM Symposium on Operating Systems Principles (CCF-A)
October 13-16, 2025, Lotte Hotel World, Seoul, Republic of Korea
Deadline: Fri Apr 18th 2025 08:59:59 BST (2025-04-17 23:59:59 UTC-8)
Website of SOSP

SAS, International Static Analysis Symposium (CCF-B)
October 12-18, 2025, Singapore
Deadline: Mon May 5th 2025 12:59:59 BST (2025-05-04 23:59:59 UTC-12)
Website of SAS

ASE, International Conference on Automated Software Engineering (CCF-A)
November 16-20, 2025, Seoul, South Korea
Deadline: Sat May 31st 2025 12:59:59 BST (2025-05-30 23:59:59 UTC-12)
Website of ASE

ICFP, International Conference on Functional Programming (CCF-B)
October 12-18, 2025, Singapore
Deadline: Fri Jun 13th 2025 12:59:59 BST (2025-06-12 23:59:59 UTC-12)
Website of ICFP

ICSE, International Conference on Software Engineering (CCF-A)
April 12-18, 2026, Rio de Janeiro, Brazil
Deadline: Sat Jul 19th 2025 12:59:59 BST (2025-07-18 23:59:59 UTC-12)
Website of ICSE

Most of the deadlines above are anchored to UTC-12 (Anywhere on Earth); SOSP's is anchored to UTC-8. A small timezone-conversion sketch follows below.
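To convert the AoE (UTC-12) timestamps above to a local timezone, here is a minimal Python sketch using the standard-library zoneinfo module (Python 3.9+; on platforms without a system tz database, the tzdata package is needed). The ASE deadline from the list is used as the example, and the target timezones are arbitrary picks.

from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# ASE 2025 deadline from the list above: 2025-05-30 23:59:59 at UTC-12 (AoE).
# Note the POSIX sign convention in the Etc area: "Etc/GMT+12" means UTC-12.
aoe = ZoneInfo("Etc/GMT+12")
deadline_aoe = datetime(2025, 5, 30, 23, 59, 59, tzinfo=aoe)

# Target zones are just examples; substitute any IANA timezone name.
for tz_name in ("Europe/London", "Asia/Seoul", "America/Los_Angeles"):
    local = deadline_aoe.astimezone(ZoneInfo(tz_name))
    print(tz_name, local.strftime("%Y-%m-%d %H:%M:%S %Z"))
# Europe/London prints 2025-05-31 12:59:59 BST, matching the listed time.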
• HCI, CSCW, UbiComp, UIST, EuroVis and IEEE VIS

    0 Topics
    0 Posts
    No new posts.
  • Anything around peer review for conferences such as SIGIR, WWW, ICMR, ICME, ECIR, ICASSP and ACM MM.

1 Topic
1 Post
Recently, someone surfaced (again) a method to query the decision status of a paper submission before the official release for ICME 2025. By sending requests to a specific API endpoint in the CMT system (https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)), one can see the submission status via a StatusId field, where 1 means pending, 2 indicates acceptance, and 3 indicates rejection.

This trick is not limited to ICME 2025. It appears that the same method can be applied to several other conferences, including IJCAI, ICME, ICASSP, IJCNN and ICMR.

However, it is important to emphasize that using this technique violates the fairness and integrity of the peer-review process. Exploiting such a loophole undermines the confidentiality and impartiality that are essential to academic evaluations. This is a potential breach of academic ethics, and an official fix is needed to prevent abuse.

Below is a simplified Python script that demonstrates how this status monitoring might work. Warning: this code is provided solely for educational purposes to illustrate the vulnerability. It should not be used to bypass proper review procedures.

import requests
import time
import smtplib
from email.mime.text import MIMEText
from email.header import Header
import logging

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("submission_monitor.log"),
        logging.StreamHandler()
    ]
)

# List of submission URLs to monitor (replace 'Your_paper_id' accordingly)
SUBMISSION_URLS = [
    "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)",
    "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)"
]

# Email configuration (replace with your actual details)
EMAIL_CONFIG = {
    "smtp_server": "smtp.qq.com",
    "smtp_port": 587,
    "sender": "your_email@example.com",
    "password": "your_email_password",
    "receiver": "recipient@example.com"
}


def get_status(url):
    """
    Check the submission status from the provided URL.
    Returns the status ID and a success flag.
    """
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0',
            'Accept': 'application/json',
            'Referer': 'https://cmt3.research.microsoft.com/ICME2025/',
            # Insert your cookie here after logging in to CMT
            'Cookie': 'your_full_cookie'
        }
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 200:
            data = response.json()
            status_id = data.get("StatusId")
            logging.info(f"URL: {url}, StatusId: {status_id}")
            return status_id, True
        else:
            logging.error(f"Failed request. Status code: {response.status_code} for URL: {url}")
            return None, False
    except Exception as e:
        logging.error(f"Error while checking status for URL: {url} - {e}")
        return None, False


def send_notification(subject, message):
    """
    Send an email notification with the provided subject and message.
    """
    try:
        msg = MIMEText(message, 'plain', 'utf-8')
        msg['Subject'] = Header(subject, 'utf-8')
        msg['From'] = EMAIL_CONFIG["sender"]
        msg['To'] = EMAIL_CONFIG["receiver"]
        server = smtplib.SMTP(EMAIL_CONFIG["smtp_server"], EMAIL_CONFIG["smtp_port"])
        server.starttls()
        server.login(EMAIL_CONFIG["sender"], EMAIL_CONFIG["password"])
        server.sendmail(EMAIL_CONFIG["sender"], [EMAIL_CONFIG["receiver"]], msg.as_string())
        server.quit()
        logging.info(f"Email sent successfully: {subject}")
        return True
    except Exception as e:
        logging.error(f"Failed to send email: {e}")
        return False


def monitor_submissions():
    """
    Monitor the status of submissions continuously.
    """
    notified = set()
    logging.info("Starting submission monitoring...")
    while True:
        for url in SUBMISSION_URLS:
            if url in notified:
                continue
            status, success = get_status(url)
            if success and status is not None and status != 1:
                email_subject = f"Submission Update: {url}"
                email_message = f"New StatusId: {status}"
                if send_notification(email_subject, email_message):
                    notified.add(url)
                    logging.info(f"Notification sent for URL: {url} with StatusId: {status}")
        if all(url in notified for url in SUBMISSION_URLS):
            logging.info("All submission statuses updated. Ending monitoring.")
            break
        time.sleep(60)  # Wait for 60 seconds before checking again


if __name__ == "__main__":
    monitor_submissions()

Parting thoughts
While the discovery of this loophole may seem like an ingenious workaround, it is fundamentally unethical and a clear violation of the fairness expected in academic peer review. Exploiting such vulnerabilities not only compromises the integrity of the review process but also undermines trust in scholarly communication. We recommend that the CMT system administrators implement an official fix to close this gap. The academic community should prioritize fairness and the preservation of rigorous, unbiased review standards over any short-term gains that might come from exploiting such flaws.
  • Anonymously share data, results, or materials. Useful for rebuttals, blind submissions and more. Only unverified users can post (and edit or delete anytime afterwards).

    2 Topics
    2 Posts
Table 1. Performance Comparison on Benchmark Datasets

Method             | CIFAR-10 (Acc %) | CIFAR-100 (Acc %) | TinyImageNet (Acc %) | Params (M) | FLOPs (G)
ResNet-18          | 94.5             | 76.3              | 64.1                 | 11.2       | 1.8
ViT-Small          | 95.2             | 77.9              | 65.7                 | 21.7       | 4.6
Ours (GraphFormer) | 96.1             | 79.5              | 67.3                 | 19.8       | 3.9

Table 2. Ablation Study on Temporal Encoding

Method Variant          | CIFAR-10 (Acc %) | TinyImageNet (Acc %)
Ours (No Time Encoding) | 95.4             | 66.1
Ours (Sinusoidal Only)  | 95.8             | 66.8
Ours (Learnable Time)   | 96.1             | 67.3

Table 3. Robustness to Input Noise on CIFAR-10 (% Accuracy)

Method    | No Noise | Gaussian (σ=0.1) | Gaussian (σ=0.3) | FGSM (ε=0.1)
ResNet-18 | 94.5     | 91.2             | 83.7             | 78.9
ViT-Small | 95.2     | 92.4             | 85.1             | 80.3
Ours      | 96.1     | 93.5             | 87.0             | 83.7

Table 4. Generalization to Out-of-Distribution (OOD) Data

Method    | In-Domain (CIFAR-10) | OOD (SVHN) | OOD (CIFAR-10-C)
ResNet-18 | 94.5                 | 76.3       | 71.2
ViT-Small | 95.2                 | 78.1       | 73.0
Ours      | 96.1                 | 80.5       | 75.3