CSPaper Review Update (2025-08-08): WACV, 2× missing-literature retrieval & expanded correctness checks
-
Dear CSPaper Review Users,
We are happy to announce the latest enhancements to CSPaper Review (https://review.cspaper.org/)!
What's New
-
WACV 2026 Main Track Review Now Available
The reviewing interface and AI-assisted tools are now live for the WACV 2026 main conference track. You can submit reviews, run consistency checks, and explore automated summaries as before.
-
Improved Related Work Retrieval
We have expanded the search space for identifying missing related work. The tool now returns 2× more candidate papers potentially missing from a submission's bibliography, improving coverage and citation integrity.
-
Expanded Quality and Correctness Checks
Based on community feedback and real-case evaluations, we've significantly strengthened our automated paper quality checks. The system now rigorously detects serious errors such as:
- Invalid or overstated statistical claims, including unverifiable identifiability conditions (e.g., necessity/sufficiency misstatements).
- Experimental methodology flaws such as hyperparameter tuning on test sets or improper evaluation protocols.
- Misleading claims of novelty or significance unsupported by theory or empirical evidence.
- Incorrect convexity or optimization claims lacking justification.
- Incoherent writing, or papers that appear to be LLM-drafted with low overall quality.
These additions improve early detection of desk-reject-worthy issues, supporting reviewers in catching critical errors and improving overall review quality.
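For illustration only, a check of this kind can be thought of as a pass over the paper's text that surfaces candidate red flags for human review. The sketch below is an assumption, not the service's actual detector; the category names, patterns, and function name are all illustrative:

```python
import re

# Illustrative red-flag patterns, loosely mapped to the check categories above.
CHECKS = {
    "statistical claim": re.compile(r"\b(necessary and sufficient|identifiab\w+)\b", re.I),
    "evaluation protocol": re.compile(r"\btuned? .{0,40}test set\b", re.I),
    "novelty claim": re.compile(r"\b(first ever|unprecedented)\b", re.I),
    "optimization claim": re.compile(r"\bconvex\b", re.I),
}

def flag_sentences(text: str) -> list[tuple[str, str]]:
    """Return (category, sentence) pairs that deserve a closer human look."""
    hits = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for category, pattern in CHECKS.items():
            if pattern.search(sentence):
                hits.append((category, sentence.strip()))
    return hits

sample = "Our loss is convex. We tuned hyperparameters on the test set."
for category, sentence in flag_sentences(sample):
    print(category, "->", sentence)
```

A production system would of course combine such shallow signals with model-based reasoning; keyword matches alone (e.g., legitimate uses of "convex") would produce many false positives.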
Pre-Announcements:
1. Sign-In Requirement Before Review Submission
To ensure backend stability and facilitate secure review deletion upon request, we will soon enforce mandatory sign-in before submitting a review. This change will take effect next week, and we'll post a follow-up update here once it's live.
2. PDF Page Limit
Currently, we automatically truncate PDF files to the first 15 pages if the uploaded document exceeds this length. This behavior often results in incomplete review reports, which can negatively affect the perceived quality of our evaluations. To address this, we will enforce a hard limit on the maximum number of PDF pages that can be processed.
We acknowledge that this limitation may feel restrictive, so we are considering increasing the limit to a more generous threshold, such as 20 pages or more. The final decision will be informed by data analysis to strike a balance between system performance, cost, and user experience.
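As a minimal sketch of the current truncation behavior (illustrative only; `MAX_PAGES` and the function name are assumptions, not the service's code):

```python
MAX_PAGES = 15  # current cut-off; an increase to 20+ pages is under consideration

def truncate_pages(pages: list, limit: int = MAX_PAGES) -> tuple[list, bool]:
    """Return the pages that will actually be reviewed, plus a truncation flag.

    `pages` stands in for the parsed pages of an uploaded PDF.
    """
    was_truncated = len(pages) > limit
    return pages[:limit], was_truncated

# A 22-page upload: only the first 15 pages reach the reviewer.
kept, was_cut = truncate_pages(list(range(22)))
```

Under the planned hard limit, over-length uploads would instead be rejected up front, so no review is ever generated from a silently incomplete document.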
GPT-5 Benchmarking Underway
With OpenAI's release of GPT-5, we've initiated benchmarking to evaluate its suitability as a CSPaper Review agent. Our benchmarks will help determine whether to enable GPT-5 for automated reviewing and summarization. We will share detailed findings in the coming weeks.
Thank you for using CSPaper Review!
– The CSPaper Team
-