OpenPrint:
A Verification-First Path for Sharing Your Research
CSPaper
A part of Scholar7 AB · April 2026
TL;DR
OpenPrint is a verification-first workflow for sharing research in public before journal or conference review.
It helps authors run structured checks on paper quality, references, and correctness so readers can evaluate work with more context.
The goal is not to replace journals or conferences, but to make early research sharing more transparent, reviewable, and trustworthy in an age of AI-accelerated paper generation.
For decades, the bottleneck in research was producing new findings.
That is no longer true.
Today, the harder problem is deciding what deserves trust, attention, and further work.
This shift is easy to miss if we keep using old language. We often talk as if the central problem in research communication were still distribution: how to upload a PDF, how to host a manuscript, how to make work discoverable. Those questions still matter, but they no longer describe the center of gravity. The real pressure point has moved upstream, into evaluation.
Researchers feel this every day. Authors wait months for decisions. Reviewers are overwhelmed. Publishers and platforms are under pressure to process ever-growing submission volume without losing quality. Valuable work gets delayed for issues that could have been detected early. Weak work slips through because the filtering layer is overloaded. The result is not merely inconvenience. It is a structural problem in how scientific attention gets allocated.
At the same time, a second change is reshaping the landscape even more dramatically: the rise of automated research systems.
What was recently a speculative idea is becoming operational reality. Systems such as EvoScientist, AutoResearchClaw, AIRS, autoresearch, and FARS make clear that the "automated researcher" is no longer a futuristic metaphor. Some of these systems can already generate ideas, search literature, design experiments, execute code, draft papers, critique results, and iterate on their own. Whether one sees them as breakthroughs, prototypes, or cautionary signs, the implication is the same: the volume of research-like artifacts is about to increase much faster than human evaluation capacity.
This is the context in which we are launching OpenPrint.
OpenPrint is our attempt to create a more structured path for sharing research in public: not by removing evaluation, but by making verification a first-class part of the process.
It is important to be precise here. OpenPrint is not a claim that formal publishing has been "solved". It is not a replacement for journals or conferences. A paper on OpenPrint today is closer to a verification-backed preprint than to a traditionally accepted paper.
But that is precisely why we think it matters.
The research ecosystem does not only need more places to upload papers. It needs better ways to check, contextualize, improve, and share them before and alongside formal review. It needs workflows that reflect the actual shape of the problem in 2026, not the shape of the problem in 2006.
The background: research creation has accelerated, but evaluation has not
The mismatch is now severe.
On one side, content generation has accelerated across the board. Large language models, agentic tool use, multimodal systems, retrieval pipelines, code generation, and autonomous research workflows have all lowered the cost of producing plausible research artifacts. A manuscript no longer requires the same amount of manual effort it once did. That does not mean good science is easy. It means that the surface act of producing a paper has become much cheaper than the act of validating what the paper is actually saying.
On the other side, the institutions that determine legitimacy still depend heavily on scarce human time. Reviewers are unpaid and overloaded. Editorial and moderation processes are difficult to scale. Quality control is often fragmented, delayed, or opaque. In many cases, authors do not learn what is wrong with a paper until months after submission, even when the underlying issues are straightforward enough to be flagged much earlier.
This tension is becoming visible even in the most important open research infrastructure.
Consider arXiv. For many researchers, arXiv has long been the canonical place where new work appears first. It changed the speed and openness of research communication by making distribution dramatically easier. But arXiv's own recent transition makes the broader point impossible to ignore. It is now establishing itself as an independent nonprofit, while also modernizing infrastructure and adapting policies in response to growing operational pressure. At the same time, arXiv has publicly updated parts of its endorsement process to help stem the flood of problematic submissions. Even without exaggeration, the signal is clear: one of the world's most important research-sharing platforms is facing the exact pressures that a new era of scale, automation, and governance inevitably brings.
This is not criticism of arXiv. On the contrary, it is recognition of how central, and how difficult, the problem has become. arXiv solved a major problem for the internet era: open distribution of papers. But the AI era adds another layer of urgency: how do we preserve openness while strengthening verification?
That is the gap OpenPrint is trying to address.
The wrong question and the right question
Much of the public conversation around AI and science still revolves around the wrong question.
The wrong question is: "Was this written by AI?"
The more useful question is: "Is this correct, valuable, and worth attention?"
That distinction matters.
OpenPrint is designed for a world in which research outputs may be:
- written entirely by humans,
- heavily AI-assisted,
- or generated by automated research systems.
We do not think the future can be managed by pretending automated research systems do not exist. Nor do we think it is enough to label content by provenance and stop there. A manuscript does not become trustworthy because a human wrote it, and it does not become worthless because an automated system contributed to it.
What matters is the evaluation layer around it:
- Are the claims supported?
- Are the references valid?
- Is the logic coherent?
- Is the work complete enough for public sharing?
- Relative to a target standard, how strong is it?
Those are the questions OpenPrint is built to foreground.
What OpenPrint is
OpenPrint aims to be a frictionless verification-first workflow for sharing research artifacts.
In practical terms, it gives authors a structured path that starts from substantive review and moves toward a public-facing research page with verification certificates. Along the way, the work passes through a sequence of checks designed to surface problems early, create more legible quality signals, and let authors improve their work before wider dissemination.
The current flow is built around several components (a minimal code sketch follows the list):
- Paper Review: A first-pass review by a specialized review agent aligned with a target venue or standard.
- Dynamic Ranking: A comparative quality signal that helps authors understand the work's current standing relative to the selected standard.
- Reference Check: Verification of citations and references, with clear handling of items that cannot be automatically matched.
- Correctness Check: A more detailed analysis of claims, evidence, scope, reasoning quality, and overstatement risk.
- Metadata and artifact preparation: Finalizing the work as a shareable research artifact, including title, authors, abstract, resources, and licensing choices.
- Identity verification before release: so the resulting artifact has a stronger institutional and attribution context.
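
To make the shape concrete, here is the promised sketch. Every class and function below is a hypothetical stand-in for the stages above; none of these names come from OpenPrint's actual interface.

```python
from dataclasses import dataclass, field

# All names below are hypothetical stand-ins for the stages listed above,
# not OpenPrint's actual API.

@dataclass
class CheckResult:
    stage: str                                 # e.g. "reference_check"
    passed: bool                               # no blocking issues found
    findings: list[str] = field(default_factory=list)

def paper_review(manuscript: str) -> CheckResult:
    # Placeholder: a real review agent would critique the manuscript
    # against the selected venue or standard.
    return CheckResult("paper_review", passed=True)

def reference_check(manuscript: str) -> CheckResult:
    # Placeholder: a real check would try to resolve every citation and
    # report any that cannot be automatically matched.
    return CheckResult("reference_check", passed=True)

def correctness_check(manuscript: str) -> CheckResult:
    # Placeholder: a real check would examine claims, evidence, scope,
    # and overstatement risk.
    return CheckResult("correctness_check", passed=True)

def run_verification(manuscript: str) -> list[CheckResult]:
    """Run the staged checks in order; each result stays inspectable."""
    stages = [paper_review, reference_check, correctness_check]
    return [stage(manuscript) for stage in stages]
```

The shape is the point: each stage returns an inspectable result rather than folding everything into one verdict.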

The result is not a black-box verdict. It is a more structured interaction between authors and evaluation.
That point is central. OpenPrint is not just "AI reviewing a paper". It is an attempt to turn a traditionally opaque and discontinuous process into a sequence of visible, inspectable, improvable steps.
Why the flow starts with review agents
One of the most important choices in OpenPrint is that the first step belongs to authors: they choose the review agent that best fits their work.
This is deliberate.
Authors usually know the intended audience and quality standard for a paper long before any formal decision is made. A systems paper and a theory paper should not be read with the same expectations. A workshop-style exploratory contribution and a top-tier conference submission are not judged the same way. Different domains emphasize different standards of evidence, completeness, novelty, reproducibility, and presentation.
Rather than forcing all papers through one universal template, OpenPrint lets authors start from the review agent that best represents the venue or evaluation style most relevant to the work. In other words, the process is not "one model judges everything". It is "authors choose the lens that best matches the work, and the verification begins there".
This also means that the scope of OpenPrint's supported domains is determined by the review agents available on the platform. As those agents expand, so does domain coverage.
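
As a rough illustration of venue-aligned selection, assume a hypothetical registry; both the standards and the agent names below are invented, not OpenPrint's catalogue.

```python
# Hypothetical registry: evaluation standards mapped to review agents.
# Both the keys and the agent names are invented for illustration.
REVIEW_AGENTS = {
    "systems-conference": "systems-review-agent-v1",
    "theory-journal": "theory-review-agent-v1",
    "ml-workshop": "ml-workshop-review-agent-v1",
}

def select_review_agent(target_standard: str) -> str:
    """The author picks the lens; verification starts from that choice."""
    if target_standard not in REVIEW_AGENTS:
        raise ValueError(
            f"No review agent covers {target_standard!r}: "
            "domain coverage is whatever agents the platform offers."
        )
    return REVIEW_AGENTS[target_standard]

print(select_review_agent("theory-journal"))  # -> theory-review-agent-v1
```

The error path carries the same implication as the paragraph above: a standard without an agent is simply out of scope until one is added.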
What makes OpenPrint different from a repository
A repository mainly answers the question: Can this manuscript be stored and accessed?
OpenPrint is trying to answer a different question: What can be said, in a structured and transparent way, about the current state of this manuscript, wherever it is shared?
That distinction may sound subtle, but it changes the entire product philosophy.
A repository is primarily about hosting.
OpenPrint is about verification and context. It does not have to be the place where a paper lives. Over time, it can also serve as a layer around work that is already visible elsewhere, such as on arXiv or other research platforms.
That is why the flow does not begin with a metadata form. It begins with evaluation.
That is also why the output is not "a PDF exists online". The output is a richer artifact (see the sketch after this list):
- the manuscript,
- the verification steps it has gone through,
- the references and claims that have been checked,
- the standard it was compared against,
- and, over time, the interactions and revisions around it.
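
One way to picture that richer artifact is as a record type. A minimal sketch, assuming invented field names that mirror the list above rather than any real OpenPrint schema:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchArtifact:
    # Field names mirror the list above; this is not a real OpenPrint schema.
    manuscript_uri: str                   # the manuscript itself
    verification_steps: list[str]         # checks the work has gone through
    checked_references: list[str]         # citations that were resolved
    target_standard: str                  # the standard it was compared against
    revisions: list[str] = field(default_factory=list)  # grows after release
```

A plain repository stores only the first field; the argument here is that the remaining fields are what make the artifact interpretable.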

This is also why we are careful not to overstate what OpenPrint is today. The goal is not to imitate a journal website with new branding. The goal is to build a more useful layer between raw manuscript creation and the broader circulation of research.
Human review still matters, especially where it matters most
Another misconception we want to avoid is that OpenPrint is trying to replace human judgment altogether.
It is not.
We will introduce human review for difficult and debatable cases. That is essential, not optional.
Some research questions are inherently ambiguous. Some claims involve borderline interpretations. Some papers sit exactly at the line where experts may disagree about novelty, soundness, scope, or framing. In such cases, "full automation" can become an attractive slogan and a poor design principle.
Our view is simpler: automation should absorb scale; humans should absorb ambiguity.
That principle also shapes how false positives and false negatives are handled. Such cases will continue to be handled case by case when users flag them, rather than being silently buried beneath a system score. We think that is the right tradeoff for a serious research product. If authors believe a result is wrong, incomplete, or unfairly flagged, there should be a path for escalation and review.
In short: OpenPrint uses agents to make evaluation faster and more structured, but it does not pretend that every hard scientific question can be reduced to one automatic answer.
We are not excluding machine-generated papers
This deserves to be stated directly.
OpenPrint also accepts papers written or generated by automated research systems.
We are doing this because pretending those systems do not exist is no longer realistic. More importantly, excluding them at the door would miss the deeper issue. The future problem is not only "who made this paper?" but "what process should decide whether this work is coherent, grounded, and worth engaging with?"
Automated research systems are already moving from toy demonstrations to end-to-end pipelines. Some focus on idea generation and experiment planning. Others include literature review, coding, paper drafting, critique, and iteration. Some are human-on-the-loop rather than human-in-the-loop. Some expose raw logs and intermediate decisions. Some optimize for conference-style outputs. Some optimize for speed.
Whatever one thinks of their current scientific quality, they change the economics of manuscript production.
That means evaluation systems now need to be designed for abundance. OpenPrint is one attempt to do that without giving up on rigor.
Why benchmarks and transparency matter
A product like this should not ask for trust merely because it uses advanced models or agentic workflows.
Trust has to be earned.
That is why benchmarking matters. We benchmark the review agents we develop, and we publish those results transparently. If a system is going to participate in research evaluation, then its failure modes, calibration, and strengths need to be visible — not hidden behind marketing language.
This is especially important because "AI review" can mean many very different things in practice. A shallow style checker, a brittle rubric engine, a literature-grounded critique system, and a venue-specific multi-stage evaluator are not the same product. Lumping them together under one phrase is misleading.
Our ambition with OpenPrint is not to claim infallibility. It is to build a system that is useful because it is structured, benchmarked, and open to scrutiny.
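
In the simplest case, "benchmarked" can be made concrete as a published comparison between an agent's scores and human reference judgments on the same papers. A toy sketch, with every number invented:

```python
# Toy benchmark: how far an agent's scores sit from human reference scores
# on the same papers. All numbers (and the 1-10 scale) are invented.
agent_scores = [7.0, 4.5, 8.0, 3.0, 6.5]
human_scores = [6.0, 5.0, 8.5, 2.5, 7.0]

def mean_absolute_error(xs: list[float], ys: list[float]) -> float:
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)

print(f"MAE vs. human reviewers: {mean_absolute_error(agent_scores, human_scores):.2f}")
# -> MAE vs. human reviewers: 0.60
```

Real benchmarks also need calibration and failure-mode analysis, but even this toy version makes the commitment concrete: publish the comparison, not just the claim.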
What the OpenPrint workflow is really trying to do
At a higher level, OpenPrint is built around a simple but important shift:
Publishing should not be a single opaque event. It should be a sequence of legible steps.
In the traditional workflow, an author often submits into a black box:
- little visibility into how the work is being evaluated,
- delayed feedback,
- compressed and inconsistent reviewer attention,
- and limited opportunities to correct obvious issues before a high-stakes decision.
OpenPrint changes that interaction model.
Instead of "submit, wait, hope", the process becomes closer to the loop sketched below:
- choose the relevant standard,
- run structured evaluation,
- inspect the results,
- fix what can be fixed,
- clarify what needs human attention,
- and then share the work with more context around its current state.
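
Read as code, that is a loop rather than a single submission event. A minimal, self-contained sketch in which every name and check is invented:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    fixable: bool   # mechanically fixable vs. needing human judgment

def evaluate(manuscript: str) -> list[Finding]:
    # Placeholder check: complains until the manuscript has been revised.
    return [] if "[fixed]" in manuscript else [Finding("unresolved citation", True)]

def prepare_for_sharing(manuscript: str, max_rounds: int = 3) -> str:
    """Evaluate, fix what is fixable, escalate the rest, then share."""
    for _ in range(max_rounds):
        findings = evaluate(manuscript)
        if not findings:
            break                               # ready to share with context
        for finding in findings:
            if finding.fixable:
                manuscript += " [fixed]"        # stand-in for an author revision
            else:
                print("escalate to human review:", finding.message)
    return manuscript

print(prepare_for_sharing("draft"))  # -> draft [fixed]
```

The division of labor from earlier appears directly in the branches: the loop absorbs what is mechanically fixable, and anything debatable is escalated to a human rather than silently scored.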
This is not only better for authors. It is also better for readers.
Readers increasingly need more than a venue name or an institutional affiliation to decide how seriously to take a paper. They need more context. They need to know whether references were checked, whether central claims were scrutinized, whether the work has been positioned against a known standard, whether the artifact has a version history, and whether discussion continues after release.
In that sense, OpenPrint is not just about helping authors prepare work. It is about making research artifacts more interpretable once they are shared.

Community feedback comes next
Research does not become meaningful only at the moment a PDF appears online.
Some of the most important evaluation happens after that point:
- replication attempts,
- criticism,
- follow-up questions,
- corrections,
- extensions,
- disagreement over interpretation,
- and informed community discussion.
That is why we plan to introduce post-release community feedback and interaction into OpenPrint. The exact design is still evolving, and we are intentionally not pretending all details are finalized. But the direction is clear: OpenPrint should become more than a one-time verification gateway. Over time, it should support a richer relationship between a research artifact and the community around it.
We think this matters because no single review moment, human or automated, can fully settle the meaning or impact of a piece of research. Good systems should therefore support ongoing interpretation, not just initial release.
Why now
The deeper reason for OpenPrint is that the old equilibrium is breaking.
When research creation becomes dramatically cheaper, the value moves to:
- evaluation,
- prioritization,
- filtering,
- correction,
- and trust.
This is true whether one is talking about AI-generated drafts, researcher-AI collaboration, conventional manuscripts, or fully automated research systems. The common denominator is not style of authorship. It is scale.
In that world, a simple hosting layer is not enough. A pure gatekeeping model is not enough either. What is needed is a verification layer: something between raw generation and wider circulation that helps the ecosystem allocate attention more intelligently.
That is the job OpenPrint is beginning to take on.
What OpenPrint is not
To avoid confusion, let us also say clearly what OpenPrint is not.
It is not:
- a replacement for journals or conferences,
- a guarantee of acceptance anywhere,
- a claim that automated systems can fully settle hard scientific disputes,
- a simple repository with AI decorations,
- or a slogan about "disrupting publishing".
It is:
- an evolving verification-first workflow,
- a path to a richer preprint-like research artifact,
- a more legible interaction between authors and evaluation,
- and an infrastructure experiment in how research dissemination should work when manuscript production scales faster than human review.
That is a more modest claim than many launch posts make. But we think it is also a more useful one.
A better question for the next era
The research world is entering a period in which more papers and research outcomes will be produced, more quickly, by a wider mix of humans and machines, than our current institutions were built to handle.
If we respond to that only by asking whether content is AI-generated, we will miss the point.
The more important question is this:
How should research be checked, improved, contextualized, and shared in a world where generation is cheap but trust is scarce?
OpenPrint is our answer so far.
Not the final answer. Not the only answer. But, we believe, a serious and necessary one.
If the old system assumed that scarcity of writing would protect the quality of what entered the world, the new system must assume the opposite: writing is abundant, and attention is the scarce resource.
That means the future of research communication will belong not only to platforms that can host papers, but to systems that can help the ecosystem decide which papers deserve careful attention — and why.
OpenPrint is our first step in building that future.
Try OpenPrint
If you have a paper in progress, a manuscript under revision, or a research artifact you want to pressure-test before wider sharing, OpenPrint is built for exactly that moment.
Start with the review agent that best matches your work. See what the system surfaces. Fix what is fixable. Escalate what is debatable. Then decide how you want to share the result.
We think many researchers will have the same reaction once they try it:
not that the system has "automated the whole process",
but that this is a much more sensible way to begin bringing research into the world.