OpenPrint
20260212.0002v1
Position

Adopt Machine-Human Collaboration Peer-Review through Computational Research Assessment

Lele Cao, Lei You, Kai Xie, Weiping Ding, Yong Du, Sven Salmonsson, Yumin Zhou, Vilhelm von Ehrenheim

Abstract

Scientific output is outgrowing human review capacity, while AI is already used to draft papers. Authors scale with machines; reviewers largely do not. This asymmetry turns quality control into a bottleneck and increases the risk of both false rejection of high-novelty work and acceptance of flawed results. We propose Computational Research Assessment (CRA) as a discipline-level, method-agnostic agenda for machine-human collaboration in peer review. CRA rests on three principles: treat disagreement as a signal that triggers escalation instead of averaging; make every critique evidence-linked, reproducible, and contestable; and build a community immune system with open corpora, benchmarks, and red-team tests to surface gaming and bias. We map these principles to a co-review engine, a community commons, and theoretical foundations, and we outline near-term pilots and falsifiable commitments, informed by an emerging production-grade pre-review system deployed in the wild.
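The first principle, treating reviewer disagreement as a signal for escalation rather than averaging it away, can be sketched in a few lines. This is an illustrative assumption, not the paper's implementation: the scoring scale, the variance threshold, and the `triage` helper are all hypothetical.

```python
# Hypothetical sketch (not from the paper): route a submission based on
# reviewer disagreement instead of averaging divergent scores away.
from statistics import mean, pstdev

def triage(scores, disagreement_threshold=1.5):
    """Return 'escalate' when reviewer scores diverge, else an averaged verdict.

    `scores` are numeric reviewer ratings on an assumed 1-10 scale; the
    threshold is an illustrative choice, not a value prescribed by CRA.
    """
    if len(scores) >= 2 and pstdev(scores) > disagreement_threshold:
        return "escalate"  # disagreement is a signal -> human meta-review
    return "accept" if mean(scores) >= 5 else "reject"

print(triage([6, 6, 5]))  # low variance -> averaged verdict: accept
print(triage([8, 2, 7]))  # high variance -> escalate, not a misleading mean
```

Under this sketch, a high-novelty paper that splits its reviewers is routed to human adjudication instead of being buried by its mean score.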

Keywords

Computational research assessment, machine-human collaboration, AI-assisted peer review, co-review engine, disagreement escalation, evidence-linked critique, reproducibility, adversarial robustness

Citation

@article{Cao2026Adopt,
  title={Adopt Machine-Human Collaboration Peer-Review through Computational Research Assessment},
  author={Lele Cao and Lei You and Kai Xie and Weiping Ding and Yong Du and Sven Salmonsson and Yumin Zhou and Vilhelm von Ehrenheim},
  year={2026},
  url={https://cspaper.org/openprint/20260212.0002v1},
  journal={OpenPrint:20260212.0002v1}
}

Version History

Version | Status  | Submitted On
v1      | Current | February 12, 2026
See full paper (PDF)