AAAI-26 Review Process Update: Scale, Integrity Measures, and Pathways to Sustainability
Email from AAAI-26 aaai-26-notifications@openreview.net
Dear AAAI Members and Affiliates,
Thank you for being a part of the AAAI community! AAAI particularly appreciates everyone who has been engaged with the AAAI-26 conference — as a contributing author, reviewer, organizer, or participant.
On behalf of AAAI and the AAAI-26 organizing team, we would like to share context on the astounding growth of AI research, describe the impact of this growth on this year’s conference paper review process, acknowledge the insights and growing pains we are experiencing, and outline steps we are taking to sustain quality and timely reviewing at this scale — both now and for the future.
Our commitment to quality and responsibility
As stewards of the AI review process, we assume the responsibility of ensuring that every submission to AAAI receives the care and consideration we expect for our own work. In addition to our organizational and volunteer roles, we are also authors. We realize how much time, sweat, and care is invested in each submission. We have experienced the thrills of seeing our best efforts accepted, published, and cited — and the disappointment that comes with a rejection. At the purest level, we share in the basic joys of science: creating new knowledge and exchanging ideas with the broader community.
With this author-centered perspective in mind, our AAAI-26 team is doing our best to ensure quality and responsible reviewing amid unprecedented interest in publishing at AAAI-26. We are writing this message to provide insight into some of the difficult decisions we have made (and will have to make) in this uncharted landscape, and to emphasize that we are applying as much empathy and discretion as can be managed at scale while treating all authors fairly and consistently.
Unprecedented scale
AAAI-26 received almost 29,000 submissions to the Main Technical Track. After removing papers that were not fully compliant with submission policies (e.g., missing PDFs, non-anonymized manuscripts, over-length papers, authors exceeding the submission cap, etc.), we still have roughly 23,000 papers under review – nearly twice the number of papers reviewed by AAAI-25!
The cost of reviewing does not scale linearly with submissions. At the scale of tens of thousands of papers, we are pushing the absolute limit of our reviewing systems with respect to storage, compute, bandwidth, workflow support, and the scarce resource of qualified reviewer time. These constraints necessitate stricter enforcement of review process policies, even if we would personally prefer the flexibility afforded in years past. The scaling needed for the review process will also introduce short delays in the reviewing schedule; we will continue to send updates to the review schedule as they arise.
We are working to expand the size of the Main Track Technical Program. With AAAI-26 taking new steps to better embrace the global AI community, we are working to accommodate a record-breaking growth in program publications and conference participation.
What the numbers look like
AAAI-26 saw a much higher-than-expected number of submissions from China—almost 20,000 of the ~29,000 total—a welcome sign of global engagement.
The three largest research primary keywords for AAAI-26 submissions are Computer Vision (nearly 10,000), Machine Learning (nearly 8,000), and Natural Language Processing (over 4,000).
To meet this demand, we have recruited 28,000+ Program Committee members, Senior Program Committee members, and Area Chairs. The AAAI-26 Program Committee is nearly three times the size of the AAAI-25 committee. The Program Committee recruiting process included reciprocal reviewers, nominated by coauthors of submitted papers.
There are 75,000+ unique submitting authors. If only 1% have a question at any given time, that is ~750 emails—which can overwhelm a volunteer-run conference, even with increased support from AAAI’s small professional staff.
Safeguarding review integrity
We are actively investigating potential ethics issues in the review process. Confirmed violations will lead to consequences, and may include sanctions outside the current review timeline and far into the future. In addition to the Ethics Chairs for AAAI-26, AAAI has standing committees for publications and for ethics to pursue investigations and/or sanctions beyond the scope of this single conference.
Our AI-assisted reviewing experiment shows promising early results, including tools to detect and counteract collusion among reviewers. We will share a detailed report after decisions have been finalized.
Paper–reviewer matching uses state-of-the-art algorithms with robustness checks against bid manipulation, which means that unsanctioned mutual bidding has a minimal effect on the paper matching process. Bidding is only one of many factors used in the AAAI-26 paper matching process, and is outweighed by other factors including area of expertise, content of past publications, and geographic diversity.
Given the extreme volume of submissions and the fact that we do not have an equal number of reviewers and papers in each subarea, some reviewers are assessing papers that are adjacent to (rather than squarely within) their core area of expertise. We ask for your grace where this occurs—AAAI is a big tent, and we are working to balance fit, fairness, and timeliness.
Building for sustainability
We are committed to evolving AAAI’s processes so that AAAI-27 — and beyond — can scale sustainably. We are considering ideas for making the conference sustainable going forward, such as increasing year-to-year continuity, expanding professional staff support, and establishing a standing editorial board. We welcome your ideas and input on shaping the future of the AAAI conference.
In closing
We are at an exciting moment for AI, and we are thrilled to see tremendous growth and global engagement. In the last few months, our team has already received more than five times the number of emails that AAAI-25 received over the entire year, surging up to 400 email requests per day. Thank you for your patience as we work through them. Because of these unique challenges, the review process is running slightly behind schedule as we sustain our focus on a fair, rigorous, and empathetic process.
We welcome community input. Please either complete this form or bring your suggestions to the AAAI-26 community meeting in Singapore. (Kindly understand that our first priority right now is keeping the review process on track and assembling an outstanding AAAI-26 program for January in Singapore.)
With appreciation for your support and understanding,
AAAI-26 Program Chairs
Odest Chadwicke Jenkins (University of Michigan, USA)
Matthew E. Taylor (University of Alberta, Canada)
AAAI-26 Associate Program Chairs
Bo An (Nanyang Technological University, Singapore)
Joydeep Biswas (University of Texas at Austin, USA)
David J. Crandall (Indiana University, USA)
Matthew Lease (University of Texas at Austin, USA)
Kiri L. Wagstaff (Oregon State University, USA)
AAAI-26 General Chair
Sven Koenig (University of California, Irvine, USA)
AAAI-26 Associate General Chair
Eric Eaton (University of Pennsylvania, USA)
AAAI Conference Committee Chair
Kevin Leyton-Brown (University of British Columbia, Canada)
AAAI President
Stephen Smith (Carnegie Mellon University, USA)
Is CSPaper (https://review.cspaper.org) collaborating with AAAI's new AI review initiative?