
Peer Review in Computer Science: good, bad & broken

Discuss everything about peer review in computer science research: its successes, failures, and the challenges in between.

This category can be followed from the open social web via the handle cs-peer-review-general@cspaper.org:443

88 Topics 271 Posts

Subcategories


  • Discuss peer review challenges in AI/ML research — submission, review quality, bias, and decision appeals at ICLR, ICML, NeurIPS, AAAI, IJCAI, AISTATS and COLT.

    38 Topics
    156 Posts
    root
    According to community reports, submission IDs have already exceeded 30k!
  • Discuss peer review challenges, submission experiences, decision fairness, reviewer quality, and biases at CVPR, ICCV, ECCV, VR, SIGGRAPH, EUROGRAPHICS, ICRA, IROS, RSS etc.

    9 Topics
    16 Posts
    root
Shocking Cases, Reviewer Rants, Score Dramas, and the True Face of CV Top-Tier Peer Review!

"Just got a small heart attack reading the title." — u/Intrepid-Essay-3283, Reddit

[image: giphy.gif]

Introduction: ICCV 2025 — Not Just Another Year

ICCV 2025 may have broken submission records (11,239 papers! 🤯), but what really set this year apart was the open outpouring of review experiences, drama, and critique across communities like Zhihu and Reddit. If you think peer review is just technical feedback, think again. This year it was a social experiment in bias, randomness, AI-detection accusations, and — sometimes — rare acts of fairness. Below, we dissect dozens of real cases reported by the community. Expect everything: miracle accepts, heartbreak rejections, reviewer bias, AC heroics, AI accusations, desk rejects, and score manipulation. Plus, we bring you the ultimate summary table — all real, all raw.

The Hall of Fame: ICCV 2025 Real Review Cases

Here is a complete table of every community case reported above. Each row is a real story. Find your favorite drama!

| # | Initial Score | Final Score | Rebuttal Effect | Decision | Reviewer/AC Notes / Notable Points | Source/Comment |
|---|---|---|---|---|---|---|
| 1 | 4/4/2 | 5/4/4 | +1, +2 | Accept | AC sided with authors after strong rebuttal | Reddit, ElPelana |
| 2 | 5/4/4 | 6/5/4 | +1, +1 | Reject | Meta-review agreed on novelty, but blamed single baseline & "misleading" boldface | Reddit, Sufficient_Ad_4885 |
| 3 | 5/4/4 | 5/4/4 | None | Reject | Several strong scores, still rejected | Reddit, kjunhot |
| 4 | 5/5/3 | 6/5/4 | +1, +2 | Accept | "Should be good" — optimism confirmed! | Reddit, Friendly-Angle-5367 |
| 5 | 4/4/4 | 4/4/4 | None | Accept | "Accept with scores of 4/4/4/4 lol" | Reddit, ParticularWork8424 |
| 6 | 5/5/4 | 6/5/4 | +1 | Accept | No info on spotlight/talk/poster | Reddit, Friendly-Angle-5367 |
| 7 | 4/3/2 | 4/3/3 | +1 | Accept | AC "saved" the paper! | Reddit, megaton00 |
| 8 | 5/5/4 | 6/5/4 | +1 | Accept | (same as #6, poster/talk unknown) | Reddit, Virtual_Plum121 |
| 9 | 5/3/2 | 4/4/2 | mixed | Reject | Rebuttal didn't save it, "incrementality" issue | Reddit, realogog |
| 10 | 5/4/3 | - | - | Accept | Community optimism that "5-4-3 is achievable" | Reddit, felolorocher |
| 11 | 4/4/2 | 4/4/3 | +1 | Accept | AC fought for the paper, luck matters! | Reddit, Few_Refrigerator8308 |
| 12 | 4/3/4 | 4/4/5 | +1 | Accept | Lucky with AC | Reddit, Ok-Internet-196 |
| 13 | 5/3/3 | 4/3/3 | -1 (from 5 to 4) | Reject | Reviewer simply wrote "I read the rebuttals and updated my score." | Reddit, chethankodase |
| 14 | 5/4/1 | 6/6/1 | +1/+2 | Reject | "The reviewer had a strong personal bias, but the ACs were not convinced" | Reddit, ted91512 |
| 15 | 5/3/3 | 6/5/4 | +1/+2 | Accept | "Accepted, happy ending" | Reddit, ridingabuffalo58 |
| 16 | 6/5/4 | 6/6/4 | +1 | Accept | "Accepted but not sure if poster/oral" | Reddit, InstantBuffoonery |
| 17 | 6/3/2 | - | None | Reject | "Strong accept signals" still not enough | Reddit, impatiens-capensis |
| 18 | 5/5/2 | 5/5/3 | +1 | Accept | "Reject was against the principle of our work" | Reddit, SantaSoul |
| 19 | 6/4/4 | 6/6/4 | +2 | Accept | Community support for strong scores | Reddit, curious_mortal |
| 20 | 4/4/2 | 6/4/2 | +2 | Accept | AC considered report about reviewer bias | Reddit, DuranRafid |
| 21 | 3/4/6 | 3/4/6 | None | Reject | BR reviewer didn't submit final, AC rejected | Reddit, Fluff269 |
| 22 | 3/5/5 | 5/5/5 | +2 | Accept | "Any chance for oral?" | Reddit, Beginning-Youth-6369 |
| 23 | 5/3/2 | - | - | TBD | "Had a good rebuttal, let's see!" | Reddit, temporal_guy |
| 24 | 4/3/4 | - | - | TBD | "Waiting for good results!" | Reddit, Ok-Internet-196 |
| 25 | 5/5/4 | 5/5/4 | None | Accept | "555 we fn did it boys" | Reddit, lifex_ |
| 26 | 6/3/3 | 5/5/4 | - | Accept | "Here we go Hawaii♡" | Reddit, DriveOdd5983 |
| 27 | 5/5/4 | 5/5/5 | +1 | Accept | "Many thanks to AC" | Reddit, GuessAIDoesTheTrick |
| 28 | 3/4/5 | 5/4/5 | +2 | Accept | "My first Accept!" | Reddit, Fantastic_Bedroom170 |
| 29 | 4/4/2 | 2/3/2 | -2, -2 | Reject | "Reviewers praised the paper, but still rejected" | Reddit, upthread |
| 30 | 5/4/4 | 5/4/4 | None | Reject | "Another 5/4/4 reject here!" | Reddit, kjunhot |
| 31 | 4/3/2 | 4/3/2 | None | TBD | "432 with hope" | Zhihu, 泡泡鱼 |
| 32 | 4/4/4 | 4/4/4 | None | Accept | "3 borderline accepts, got in!" | Zhihu, 小月 |
| 33 | 5/5/3 | 5/5/5 | +2 | Accept | "5-score reviewer roasted the 3-score reviewer" | Zhihu, Ealice |
| 34 | 5/5/4 | 5/5/5 | +1 | Accept | "Highlight downgraded to poster, but happy" | Zhihu, Frank |
| 35 | 1/3/5 | 2/4/5 | +1/+2 | Reject | "Met a 'bad guy' reviewer" | Zhihu, Frank |
| 36 | 2/3/5 | 4/4/5 | +2 | Accept | "Congrats co-authors!" | Zhihu, Frank |
| 37 | 4/3/2 | 4/3/2 | None | Accept | "AC appreciated explanation, saved the paper" | Zhihu, Feng Qiao |
| 38 | 4/4/2 | 5/4/3 | +1/+1 | Accept | "After all, got in!" | Zhihu, 结弦 |
| 39 | 4/4/1 | 4/4/1 | None | TBD | "One reviewer 'writing randomly'" | Zhihu, ppphhhttt |
| 40 | 4/4/3/2 | - | - | TBD | "Asked to use more datasets for generalization" | Zhihu, 随机 |
| 41 | 4/4/6 (4/4/3) | - | - | TBD | "Everyone changed scores last two days" | Zhihu, 877129391241 |
| 42 | 5/5/3 | 5/5/3 | None | Accept | "Thanks AC for acceptance" | Zhihu, Ealice |
| 43 | 4/4/3/2 | - | - | Accept | "First-time submission, fair attack points" | Zhihu, 张读白 |
| 44 | 4/4/4 | 4/4/4 | None | Accept | "Confident, hoping for luck" | Zhihu, hellobug |
| 45 | 5/5/4/1 | - | - | TBD | "Accused of copying concurrent work" | Zhihu, 凪·云抹烟霞 |
| 46 | 5/5/4 | 5/5/5 | +1 | Accept | "Poster, but AC downgraded highlight" | Zhihu, Frank |
| 47 | 6/3/2 | - | None | Reject | High initial scores, still rejected | Reddit, impatiens-capensis |
| 48 | 4/3/2 | 4/3/2 | None | Accept | "Average final 4, some hope" | Zhihu, 泡泡鱼 |
| 49 | 5/6/3 | 5/6/4 | +1 | Accept | "Grateful to AC!" | Zhihu, 夏影 |
| 50 | 6/5/4 | 6/6/4 | +1 | Accept | "Accepted, not sure if poster or oral" | Reddit, InstantBuffoonery |

NOTE: This is NOT an exhaustive list of all ICCV 2025 papers, but every real individual case reported in the Zhihu and Reddit community discussions included above. Many entries were still pending at posting time — when the author didn't share the final result, the decision is marked TBD. Many papers swung between accept and reject on details such as one reviewer not updating, AC/meta-reviewer overrides, "bad guy" or mean reviewers, and luck with the batch cutoff.

🧠 ICCV 2025 Review Insights: What Did We Learn?

1. Luck matters — sometimes more than merit. Multiple papers with 5/5/3 or even 6/5/4 were rejected. Others with one weak reject (2) got in — sometimes only because the AC "fought for it." "Getting lucky with the reviewers is almost as important as the quality of the paper itself." (Reddit)

2. Reviewer quality is all over the place. Dozens reported short, generic, or careless reviews — sometimes 1-2 lines with major negative impact. Multiple people accused reviewers of being AI-generated (GPT/Claude/etc.); several ran AI detectors and reported >90% "AI-written." Desk rejects were sometimes triggered by reviewer irresponsibility (ICCV officially desk-rejected 29 papers for "irresponsible" reviewers).

3. Rebuttal can save you… sometimes. Many cases where good rebuttals led to score increases and acceptance, but also numerous stories where reviewers didn't update, or even lowered scores post-rebuttal without clear reason.

4. Meta-reviewers & ACs wield real power. Several stories where ACs overruled reviewers (for both acceptance and rejection). Meta-reviewer "mistakes" (e.g., recommending accept but clicking reject) — some authors appealed and got the result changed.

5. System flaws and community frustrations. Complaints about the "review lottery," irresponsible or underqualified reviewers, ACs ignoring rebuttals, and unfixable errors. Many hope for peer-review reform: more double-blind accountability, reviewer ratings, and even rewards for good reviewing (see this arXiv paper proposing reform).

Community Quotes & Highlights

"Now I believe in luck, not just science." — Anonymous
"Desk reject just before notification, it's a heartbreaker." — 877129391241, Zhihu
"I got 555, we did it boys." — lifex, Reddit
"Three ACs gave Accept, but it was still rejected — I have no words." — 寄寄子, Zhihu
"Training loss increases inference time — is this GPT reviewing?" — Knight, Zhihu
"Meta-review: Accept. Final decision: Reject. Reached out, they fixed it." — fall22_cs_throwaway, Reddit

Final Thoughts: Is ICCV Peer Review Broken?

ICCV 2025 gave us a microcosm of everything good and bad about large-scale peer review: scientific excellence, reviewer burnout, human bias, reviewer heroism, and plenty of randomness.

Takeaways:
- Prepare your best work, but steel yourself for randomness.
- Test early on https://review.cspaper.org before and after submission to help build reasonable expectations.
- Craft a strong, detailed rebuttal — sometimes it works miracles.
- If you sense real injustice, appeal or contact your AC, but don't count on it.
- Above all: don't take a single decision as a final judgment of your science, your skill, or your future.

Join the Conversation!

What was YOUR ICCV 2025 review experience? Did you spot AI-generated reviews? Did a miracle rebuttal save your work? Is the peer-review crisis fixable, or are we doomed to reviewer roulette forever? A small tally script is sketched below for anyone who wants to quantify the drama.

"Always hoping for the best! But worst-case scenario, one can go for a Workshop with a Proceedings Track!" — Reddit

[image: peerreview-nickkim.jpg]

Let's keep pushing for better science — and a better system. If you find this article helpful, insightful, or just painfully relatable, upvote and share with your fellow researchers. The struggle is real, and you are not alone!
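P.S. If you want to sanity-check the "rebuttal can save you… sometimes" claim against the case table yourself, a minimal sketch along these lines may help. It assumes you have hand-copied the table into a hypothetical cases.csv with columns initial, final, decision; that file and its column names are illustrative, not something that ships with this post.

```python
import csv
from collections import Counter

# Hypothetical file transcribed by hand from the case table above;
# expected columns: initial, final, decision (e.g. "5/4/4", "6/5/4", "Accept").
CASES_FILE = "cases.csv"

def mean_score(cell):
    """Parse a score cell like '5/4/4' into its mean; return None for '-' or blanks."""
    parts = [p for p in cell.replace(" ", "").split("/") if p.isdigit()]
    return sum(map(int, parts)) / len(parts) if parts else None

tally = Counter()
with open(CASES_FILE, newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        before, after = mean_score(row["initial"]), mean_score(row["final"])
        decision = row["decision"].strip()
        if before is None or after is None or decision == "TBD":
            continue  # skip cases with missing scores or pending decisions
        moved = "up" if after > before else ("down" if after < before else "flat")
        tally[(moved, decision)] += 1

# How often did scores that moved up / stayed flat / dropped end in Accept vs Reject?
for (moved, decision), n in sorted(tally.items()):
    print(f"scores {moved:>4} -> {decision}: {n}")
```

Even a crude tally like this makes the pattern from the insights section visible: upward score moves cluster around accepts, but by no means guarantee them.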
  • Discuss peer review, submission experiences, and decision challenges for NLP research at ACL, EMNLP, NAACL, and COLING.

    11 Topics
    25 Posts
    Joanne
[image: 1753905014241-acl-2025-rewards.png]

Global NLP Community Converges on Vienna for a Record-Breaking 63rd Annual Meeting

Hardware breakthroughs, societal guardrails & time-tested classics. Below you'll find expanded snapshots of every major award announced in Vienna, enriched with quick-read insights and primary-source links.

Spotlight on the Four Best Papers

| Theme | Key Idea | One-Line Impact | Paper & Lead Labs |
|---|---|---|---|
| Efficiency | Native Sparse Attention (NSA) splits keys/values into Compress · Select · Slide branches with CUDA-level kernels. | Long-context LLMs run at full-attention quality but >2× faster on A100s. | DeepSeek × PKU × UW — Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention |
| Safety / Alignment | Elasticity — pre-training inertia that pulls fine-tuned weights back to the original distribution. | Deep alignment may require pre-training-scale compute, not "cheap" post-training tweaks. | Peking U. (Yaodong Yang) — Language Models Resist Alignment: Evidence from Data Compression |
| Fairness | "Difference awareness" benchmark (8 scenarios · 16k questions) tests when group-specific treatment is desirable. | Shows "color-blind" debiasing can backfire; fairness is multidimensional. | Stanford × Cornell Tech — Fairness through Difference Awareness |
| Human-like Reasoning | LLMs sample responses via descriptive (statistical) and prescriptive (normative) heuristics. | Explains subtle biases in health and econ outputs; informs policy audits. | CISPA × Microsoft × TCS — A Theory of Response Sampling in LLMs |

Why They Matter

- Cloud-cost pressure: NSA-style sparsity will be irresistible to any org paying by GPU-hour.
- Regulatory urgency: Elasticity + sampling bias suggest upcoming EU/US safety rules must probe training provenance, not just inference behavior.
- Benchmark reboot: Difference-aware fairness raises the bar for North-American policy datasets.

Beyond the Best: Key Awards

[image: 1753905669075-6151952e-7334-4ff8-959b-a3d1a868af70-image.png]

| Award | Winner(s) | Take-Away |
|---|---|---|
| Best Social-Impact Papers | 2 papers | Generative-AI plagiarism detection · global hate-speech "day-in-the-life" dataset. |
| Best Resource Papers | 3 papers | Multilingual synthetic speech (IndicSynth), canine phonetic alphabet, 1,000+ LM "cartography." |
| Best Theme Papers | 3 papers | MaCP micro-finetuning (a few KiB of params), Meta-rater multidimensional data curation, SubLIME 80–99% cheaper eval. |
| Outstanding Papers (26) | From Zipf-law reformulations to token recycling | Shows breadth: theory, safety, hardware, evaluation, and even dog phonetics. |
| Best Demo | OLMoTrace (AI2) | Real-time trace-back of any LLM output to trillions of training tokens — auditability meets UX. |
| TACL Best | Weakly-supervised CCG instruction following · short-story summarization with authors in the loop | Rethinks grounding & human alignment at smaller scales. |
| Test-of-Time (25 y / 10 y) | SRL automatic labeling · global & local NMT attention | Underlines the longevity of semantic frames & dot-product attention. |
| Lifetime Achievement | Kathy McKeown (Columbia) | 43 years pioneering NLG, summarization, and mentoring. |
| Distinguished Service | Julia B. Hirschberg (Columbia) | 35 years of ACL & Computational Linguistics leadership. |

1 · Best Social-Impact Papers

| Paper | Authors & Affiliations | Why It Matters |
|---|---|---|
| All That Glitters Is Not Novel: Plagiarism in AI-Generated Research | Tarun Gupta, Danish Pruthi (CMU) | 24% of 50 "autonomously generated" drafts were near-copy paraphrases that evade detectors — spotlighting plagiarism forensics in autonomous science. |
| HateDay: A Day-Long, Multilingual Twitter Snapshot | Manuel Tonneau et al. (Oxford Internet Institute) | Eight-language dataset shows real-world hate-speech prevalence is far higher — and model accuracy far lower — outside English. |

2 · Best Resource Papers

| Dataset / Tool | Highlights |
|---|---|
| IndicSynth | 2.8k hours of synthetic speech covering 13 low-resource Indian languages; unlocks TTS + ASR research for Bhojpuri, Maithili, Konkani, and more. |
| Canine Phonetic Alphabet | Algorithmic inventory of dog phonemes from 9k recordings — opens the door to cross-species speech NLP. |
| LM Cartography (Log-Likelihood Vector) | Embeds 1,000+ language models in a shared vector space; Euclidean distance ≈ KL-divergence — enables taxonomy & drift analysis at linear cost. |

3 · Best Theme Papers

| Paper | One-Sentence Take-Away |
|---|---|
| MaCP: Minimal yet Mighty Adaptation via Hierarchical Cosine Projection | JPEG-style cosine pruning lets you fine-tune a 7B-param LLM with <256 kB of learnable weights — SOTA on NLU + multimodal tasks. |
| Meta-rater: Multi-Dimensional Data Selection | Blends 25 quality metrics into four axes — Professionalism, Readability, Reasoning, Cleanliness — cutting pre-training tokens 50% with +3% downstream gains. |
| SubLIME: Rank-Aware Subset Evaluation | Keeps ≤20% of any benchmark while preserving leaderboard order (Spearman ρ > 0.9); saves up to 99% of eval FLOPs. |

[image: 1753907239444-0fdee6fc-d29f-432c-b398-33f279168ef7-image.png]

4 · Outstanding Papers (26)

- A New Formulation of Zipf's Meaning-Frequency Law through Contextual Diversity
- All That Glitters Is Not Novel: Plagiarism in AI-Generated Research
- Between Circuits and Chomsky: Pre-pretraining on Formal Languages Imparts Linguistic Biases
- Beyond N-Grams: Rethinking Evaluation Metrics and Strategies for Multilingual Abstractive Summarization
- Bridging the Language Gaps in Large Language Models with Inference-Time Cross-Lingual Intervention
- Byte Latent Transformer: Patches Scale Better Than Tokens
- Capability Salience Vector: Fine-grained Alignment of Loss and Capabilities for Downstream Task Scaling Law
- From Real to Synthetic: Synthesizing Millions of Diversified and Complicated User Instructions with Attributed Grounding
- HALoGEN: Fantastic LLM Hallucinations and Where to Find Them
- HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter
- IoT: Embedding Standardization Method Towards Zero Modality Gap
- IndicSynth: A Large-Scale Multilingual Synthetic Speech Dataset for Low-Resource Indian Languages
- LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models
- Llama See, Llama Do: A Mechanistic Perspective on Contextual Entrainment and Distraction in LLMs
- LLMs Know Their Vulnerabilities: Uncover Safety Gaps through Natural Distribution Shifts
- Mapping 1,000+ Language Models via the Log-Likelihood Vector
- MiniLongBench: The Low-cost Long Context Understanding Benchmark for Large Language Models
- PARME: Parallel Corpora for Low-Resourced Middle Eastern Languages
- Past Meets Present: Creating Historical Analogy with Large Language Models
- Pre3: Enabling Deterministic Pushdown Automata for Faster Structured LLM Generation
- Rethinking the Role of Prompting Strategies in LLM Test-Time Scaling: A Perspective of Probability Theory
- Revisiting Compositional Generalization Capability of Large Language Models Considering Instruction Following Ability
- Toward Automatic Discovery of a Canine Phonetic Alphabet
- Towards the Law of Capacity Gap in Distilling Language Models
- Turning Trash into Treasure: Accelerating Inference of Large Language Models with Token Recycling
- Typology-Guided Adaptation for African NLP

5 · Best Demo

| Demo | What It Does |
|---|---|
| OLMoTrace (AI2) | Real-time trace-back of any model output to its multi-trillion-token training corpus — auditing & copyright checks in seconds. |

6 · TACL Best Papers

| Paper | Core Insight |
|---|---|
| Weakly Supervised Learning of Semantic Parsers for Mapping Instructions to Actions | Grounded CCG parser learns from trajectory success signals — the birth of modern instruction following. |
| Reading Subtext: Short-Story Summarization with Writers-in-the-Loop | Human authors show GPT-4 & Claude miss implicit motives & timeline jumps >50% of the time — pushing for creative-content benchmarks. |

7 · Test-of-Time Awards

| Span | Classic Contribution | Lasting Impact |
|---|---|---|
| 25-Year (2000) | Automatic Labeling of Semantic Roles (Gildea & Jurafsky) | Kick-started SRL; >2.6k citations and still foundational for event extraction. |
| 10-Year (2015) | Effective Approaches to Attention-Based NMT (Luong et al.) | Introduced global vs. local attention & dot-product scoring — precursor to today's Q/K/V transformers. |

[image: 1753907079661-2e3b92fc-b38b-44da-a36d-4bd33bade9d4-image.png]
[image: 1753906938021-2a90d3f9-be17-4994-806b-fc20b4c90de4-image.png]
[image: 1753906984660-db498526-cf76-45e5-813d-100063d23c1b-image.png]

8 · Lifetime & Service Honors

| Award | Laureate | Legacy |
|---|---|---|
| Lifetime Achievement | Kathleen R. McKeown | 43 years pioneering text generation & multi-document summarization; founding director, Columbia DSI; mentor to two generations of NLP leaders. |
| Distinguished Service | Julia B. Hirschberg | 35 years steering ACL policy & Computational Linguistics; trail-blazer in prosody & spoken dialogue systems. |

[image: 1753906883351-2895b4a6-50ea-4ff5-a527-d043b9a610c3-image.png]
[image: 1753906905786-a8d0fb98-ab1e-4723-a96c-78fa2fe67cfe-image.png]

What Global Practitioners Should Watch

- The cost curve is bending: sparse, hardware-aware designs (NSA, KV-eviction, token recycling) will dictate which labs can still train frontier models as GPU prices stay volatile.
- Alignment ≠ fine-tuning: "elasticity" reframes safety from a patching problem to a co-training problem — expect a rise in alignment-during-pre-training methods and joint governance.
- Fairness travels badly: benchmarks rooted in US civil-rights law clash with Asian data realities. Multiregional "difference aware" suites could become the next multilingual GLUE.
- Provenance is product-ready: OLMoTrace & trace-back demos indicate that open-source stacks will soon let enterprises prove where every token came from — key for EU AI Act compliance.
- Author demographics matter: with 51% of first authors from China, conference culture, tutorial topics, and even review guidelines are drifting East. Western labs must collaborate, not compete on size alone.

TL;DR

ACL 2025 broke every record — but more importantly, it set the agenda: build LLMs that are faster (DeepSeek), fairer (Stanford/Cornell), safer (Peking U.), and more human-aware (CISPA). The future of NLP will be judged not just by scale, but by how efficiently and responsibly that scale is used.
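A side note on the SubLIME entry above: its core criterion is rank preservation, i.e. models should rank in (nearly) the same order on the subset as on the full benchmark. As a toy illustration of that criterion only (not the SubLIME method itself), one can compare per-model scores on a full benchmark against a 20% subset; the numbers below are made up for illustration.

```python
# Toy check of rank preservation between a full benchmark and a subset.
# The score lists are illustrative, invented numbers, not real leaderboard data.
from scipy.stats import spearmanr

full_benchmark_scores = [71.2, 68.5, 66.0, 64.3, 59.8, 55.1]  # per-model accuracy on all items
subset_scores         = [70.4, 65.2, 66.1, 63.9, 60.5, 54.0]  # same models on a 20% subset

rho, p_value = spearmanr(full_benchmark_scores, subset_scores)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3g})")
# One adjacent swap in the ranking gives rho ≈ 0.94 here; a rho above ~0.9 means the
# subset preserves the leaderboard ordering, the property SubLIME-style evaluation targets.
```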
  • SIGKDD, SIGMOD, ICDE, CIKM, WSDM, VLDB, ICDM and PODS

    4 Topics
    29 Posts
    Joanne
[image: 1753375505199-866c4b66-8902-4e99-8065-60d1806309a6-vldb2026.png]

The International Conference on Very Large Data Bases (VLDB) is a premier annual forum for data management and scalable data science research, bringing together academics, industry engineers, practitioners, and users. VLDB 2026 will feature research talks, keynotes, panels, tutorials, demonstrations, industrial sessions, and workshops spanning the full spectrum of information management, from system architecture and theory to large-scale experimentation and demanding real-world applications. Key areas of interest for its companion journal PVLDB include, but are not limited to: data mining and analytics; data privacy and security; database engines; database performance and manageability; distributed database systems; graph and network data; information integration and data quality; languages; machine learning / AI and databases; novel database architectures; provenance and workflows; specialized and domain-specific data management; text and semi-structured data; and user interfaces.

The 52nd International Conference on Very Large Data Bases (VLDB 2026) runs 31 Aug – 4 Sep 2026 in Boston, MA, USA. Peer review is handled via Microsoft's Conference Management Toolkit (CMT). The submission channel is PVLDB Vol 19 (rolling research track), with General Chairs Angela Bonifati (Lyon 1 University & IUF, France) and Mirek Riedewald (Northeastern University, USA).

Rolling submission calendar (PVLDB Vol 19)

| Phase | Recurring date* | Notes |
|---|---|---|
| Submissions open | 20th of the previous month | CMT site opens |
| Paper deadline | 1st of each month (Apr 1 2025 → Mar 1 2026) | 17:00 PT hard cut-off |
| Notification / initial reviews | 15th of the following month | Accept / Major Revision / Reject |
| Revision due | ≤ 2.5 months later (1st of the third month) | Single-round revision |
| Camera-ready instructions | 5th of the month after acceptance | Sent to accepted papers |
| Final cut-off for VLDB 2026 | 1 Jun 2026 revision deadline | Later acceptances roll to VLDB 2027 |

*See the official CFP for the full calendar.

Acceptance statistics (research track)

| Year | Submissions | Accepted | Rate |
|---|---|---|---|
| 2022 | 976 | 265 | 27.15% |
| 2021 | 882 | 212 | 24% |
| 2020 | 827 | 207 | 25.03% |
| 2019 | 677 | 128 | 18.9% |
| 2013 | 559 | 127 | 22.7% |
| 2012 | 659 | 134 | 20.3% |
| 2011 | 553 | 100 | 18.1% |

Acceptance has ranged between roughly 18% and 27% in the PVLDB era. Rolling monthly deadlines have increased submission volume while maintaining selectivity (see the quick recomputation sketched below).

Emerging research themes (2025–2026)

- Vector databases & retrieval-augmented LMs
- Hardware/software co-design for LLM workloads
- Scalable graph management & analytics
- Multimodal querying & knowledge-rich search with LLMs

Submission checklist

- Use the official PVLDB Vol 19 LaTeX/Word template.
- Declare all conflicts of interest in CMT.
- Provide an artifact URL for reproducibility.
- Submit early (before Jan 2026) to leave revision headroom.
- Ensure at least one author registers to present in Boston (or via the hybrid option).

Key links

- Main site: https://www.vldb.org/2026/
- Research-track CFP & important dates: https://www.vldb.org/2026/call-for-research-track.html
- PVLDB Vol 19 submission guidelines: https://www.vldb.org/pvldb/volumes/19/submission/

Draft early, align your work with the vector and LLM data-system wave, and shine in Boston!
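If you want to double-check the acceptance rates, or extend the table when new numbers appear, a few lines of Python reproduce the percentages from the raw counts quoted above; the counts are simply copied from this post's table.

```python
# Recompute PVLDB research-track acceptance rates from the figures quoted above.
# (Submission / acceptance counts are copied from the table in this post.)
stats = {
    2022: (976, 265),
    2021: (882, 212),
    2020: (827, 207),
    2019: (677, 128),
    2013: (559, 127),
    2012: (659, 134),
    2011: (553, 100),
}

for year, (submitted, accepted) in sorted(stats.items(), reverse=True):
    rate = 100 * accepted / submitted
    print(f"{year}: {accepted}/{submitted} accepted = {rate:.1f}%")
```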
  • ICSE, OSDI, SOSP, POPL, PLDI, FSE/ESEC, ISSTA, OOPSLA and ASE

    1 Topic
    2 Posts
    root
    It seems CCF is revising the list again: https://www.ccf.org.cn/Academic_Evaluation/By_category/2025-05-09/841985.shtml
  • HCI, CSCW, UbiComp, UIST, EuroVis and IEEE VIS

    2 Topics
    3 Posts
    Joanne
[image: 1750758497155-fa715fd6-ed5a-44be-8c8d-84f1645fac47-image.png]

CHI remains the flagship venue in the HCI field. It draws researchers from diverse disciplines, consistently puts humans at the center, and amplifies research impact through high-quality papers, compelling keynotes, and extensive doctoral consortia. Yet CHI isn't the entirety of the HCI landscape; it's just the heart of a much broader ecosystem. Here's a quick-look field guide.

Six flagship international HCI conferences

| Acronym | What makes it shine | Ideal authors | Home page |
|---|---|---|---|
| UIST | Hardware & novel interface tech; demo-heavy culture | System / device researchers | https://uist.acm.org/2025/ |
| SIGGRAPH | Graphics core plus dazzling VR/AR & 3-D interaction showcases | Graphics, visual interaction & art-tech hybrids | https://www.siggraph.org/ |
| MobileHCI | Interaction in mobile, wearable & ubiquitous contexts | Ubicomp-oriented, real-world applications | https://mobilehci.acm.org/2024/ |
| CSCW | Collaboration, remote work & social media at scale | Socio-technical & social computing teams | https://cscw.acm.org/2025/ |
| DIS | Creative, cultural & critical interaction design | UX, speculative & experience-driven scholars | https://dis.acm.org/2025/ |
| CHI | Broadest scope, human-centred ethos, highest brand value | Any HCI subfield | https://chi2026.acm.org/ |

Four high-impact HCI journals

| Journal | Focus | Good for | Home page |
|---|---|---|---|
| ACM TOCHI | Major theoretical / methodological breakthroughs | Large, mature studies needing depth | https://dl.acm.org/journal/tochi |
| IJHCS (International Journal of Human-Computer Studies) | Cognition → innovation → UX | Theory blended with applications | https://www.sciencedirect.com/journal/international-journal-of-human-computer-studies |
| CHB (Computers in Human Behavior) | Psychological & behavioural angles on HCI | Quant-heavy user studies & surveys | https://www.sciencedirect.com/journal/computers-in-human-behavior |
| IJHCI (International Journal of Human-Computer Interaction) | Cognitive, creative, health-related themes | Breadth from conceptual to applied work | https://www.tandfonline.com/journals/hihc20 |

Conference vs. journal: choosing the right vehicle

Conferences prize speed: decision to publication can take mere months, papers are concise, and novelty is king. Journals prize depth: multiple revision rounds, no strict length cap, and a focus on long-term influence.

🧪 When a conference is smarter:
- Fresh prototypes or phenomena that need rapid peer feedback
- Face-to-face networking with collaborators and recruiters
- Time-sensitive results where a decision within months matters

🧭 When a journal pays off:
- Data and theory fully polished and deserving full exposition
- Citation slow burn for tenure or promotion dossiers
- Ready for iterative reviews to reach an authoritative version

Take-away

If CHI is the main stage, then UIST, SIGGRAPH, MobileHCI, CSCW & DIS are the satellite arenas, and TOCHI, IJHCS, CHB & IJHCI serve as the deep archives. Match your study's maturity, urgency, and career goals to the venue, follow the links above, and let the best audience discover your work. Happy submitting!
  • Anything around peer review for conferences such as SIGIR, WWW, ICMR, ICME, ECIR, ICASSP and ACM MM.

    1 Topic
    1 Posts
    river
Recently, someone surfaced (again) a method to query the decision status of a paper submission before the official release for ICME 2025. By sending requests to a specific API endpoint in the CMT system (https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)), one can read the submission status from a StatusId field, where 1 means pending, 2 means accepted, and 3 means rejected.

This trick is not limited to ICME 2025. It appears that the same method can be applied to several other conferences, including IJCAI, ICME, ICASSP, IJCNN, and ICMR.

However, it is important to emphasize that using this technique violates the fairness and integrity of the peer-review process. Exploiting such a loophole undermines the confidentiality and impartiality that are essential to academic evaluations. This is a potential breach of academic ethics, and an official fix is needed to prevent abuse.

Below is a simplified Python script that demonstrates how this status monitoring might work. Warning: this code is provided solely for educational purposes, to illustrate the vulnerability. It should not be used to bypass proper review procedures.

```python
import requests
import time
import smtplib
from email.mime.text import MIMEText
from email.header import Header
import logging

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler("submission_monitor.log"),
        logging.StreamHandler()
    ]
)

# List of submission URLs to monitor (replace 'Your_paper_id' accordingly)
SUBMISSION_URLS = [
    "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)",
    "https://cmt3.research.microsoft.com/api/odata/ICME2025/Submissions(Your_paper_id)"
]

# Email configuration (replace with your actual details)
EMAIL_CONFIG = {
    "smtp_server": "smtp.qq.com",
    "smtp_port": 587,
    "sender": "your_email@example.com",
    "password": "your_email_password",
    "receiver": "recipient@example.com"
}


def get_status(url):
    """
    Check the submission status from the provided URL.
    Returns the status ID and a success flag.
    """
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0',
            'Accept': 'application/json',
            'Referer': 'https://cmt3.research.microsoft.com/ICME2025/',
            # Insert your cookie here after logging in to CMT
            'Cookie': 'your_full_cookie'
        }
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 200:
            data = response.json()
            status_id = data.get("StatusId")
            logging.info(f"URL: {url}, StatusId: {status_id}")
            return status_id, True
        else:
            logging.error(f"Failed request. Status code: {response.status_code} for URL: {url}")
            return None, False
    except Exception as e:
        logging.error(f"Error while checking status for URL: {url} - {e}")
        return None, False


def send_notification(subject, message):
    """
    Send an email notification with the provided subject and message.
    """
    try:
        msg = MIMEText(message, 'plain', 'utf-8')
        msg['Subject'] = Header(subject, 'utf-8')
        msg['From'] = EMAIL_CONFIG["sender"]
        msg['To'] = EMAIL_CONFIG["receiver"]

        server = smtplib.SMTP(EMAIL_CONFIG["smtp_server"], EMAIL_CONFIG["smtp_port"])
        server.starttls()
        server.login(EMAIL_CONFIG["sender"], EMAIL_CONFIG["password"])
        server.sendmail(EMAIL_CONFIG["sender"], [EMAIL_CONFIG["receiver"]], msg.as_string())
        server.quit()
        logging.info(f"Email sent successfully: {subject}")
        return True
    except Exception as e:
        logging.error(f"Failed to send email: {e}")
        return False


def monitor_submissions():
    """
    Monitor the status of submissions continuously.
    """
    notified = set()
    logging.info("Starting submission monitoring...")
    while True:
        for url in SUBMISSION_URLS:
            if url in notified:
                continue
            status, success = get_status(url)
            if success and status is not None and status != 1:
                email_subject = f"Submission Update: {url}"
                email_message = f"New StatusId: {status}"
                if send_notification(email_subject, email_message):
                    notified.add(url)
                    logging.info(f"Notification sent for URL: {url} with StatusId: {status}")
        if all(url in notified for url in SUBMISSION_URLS):
            logging.info("All submission statuses updated. Ending monitoring.")
            break
        time.sleep(60)  # Wait for 60 seconds before checking again


if __name__ == "__main__":
    monitor_submissions()
```

Parting thoughts

While the discovery of this loophole may seem like an ingenious workaround, it is fundamentally unethical and a clear violation of the fairness expected in academic peer review. Exploiting such vulnerabilities not only compromises the integrity of the review process but also undermines trust in scholarly communication. We recommend that the CMT system administrators implement an official fix to close this gap. The academic community should prioritize fairness and the preservation of rigorous, unbiased review standards over any short-term gains that might come from exploiting such flaws.
  • Anything around peer review for conferences such as ISCA, FAST, ASPLOS, EuroSys, HPCA, SIGMETRICS, FPGA and MICRO.

    1 Topic
    2 Posts
    root
    R.I.P. USENIX ATC ...