700+ Papers Caught Using AI Without Disclosure – Exposed on Nature’s Front Page. What New Peer-Review Standards Do We Need?

Tags: undisclosed use, nature, academ-ai, chatbot phrases, silent corrections, peer-review, ai-assisted writing, iclr, icml, neurips
Joanne (#1):

    A bombshell just dropped.

    On April 24, 2025, Nature’s front-page news revealed that over 700 academic papers have been flagged for undisclosed use of generative AI tools like ChatGPT — many of them published by Elsevier, Springer Nature, MDPI, and other major publishers.


    The online tracker Academ-AI, created by Alex Glynn at the University of Louisville, is actively compiling these cases.

    Some shocking examples of chatbot signature phrases showing up inside published papers:
    • “As of my last knowledge update”
    • “As an AI language model”
    • “Regenerate response”
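
    For illustration, here is a minimal sketch of how a screening script might flag these signature phrases in a manuscript. The phrase list comes from the examples above; the function name and the simple substring matching are assumptions for illustration, not Academ-AI’s actual methodology.

```python
import re

# Chatbot signature phrases taken from the examples above (matched case-insensitively).
SIGNATURE_PHRASES = [
    "as of my last knowledge update",
    "as an ai language model",
    "regenerate response",
]

def flag_chatbot_phrases(text: str) -> list[tuple[str, int]]:
    """Return (phrase, character offset) pairs for each signature phrase found."""
    lowered = text.lower()
    hits = []
    for phrase in SIGNATURE_PHRASES:
        for match in re.finditer(re.escape(phrase), lowered):
            hits.append((phrase, match.start()))
    return hits

# Example: a sentence of the kind Academ-AI has flagged.
sample = "As an AI language model, I cannot verify these results."
print(flag_chatbot_phrases(sample))  # [('as an ai language model', 0)]
```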

    In their recent paper, “Suspected Undeclared Use of Artificial Intelligence in the Academic Literature: An Analysis of the Academ-AI Dataset”, Glynn and colleagues document a steady, dramatic rise in AI-generated content leaking into the academic literature over the past few years.


    What’s even more concerning:

    Silent corrections are happening: publishers are quietly removing AI traces from already-published papers without issuing retraction notices or corrections.

    Notable examples:
    • Radiology Case Reports: A paper contained ChatGPT’s boilerplate self-description verbatim and was retracted.
    • Lithium battery research: The introduction section started with “Certainly, here is a possible introduction for your topic…”
    • Stem cell signaling study: AI-generated illustrations were spotted, leading Frontiers to retract the paper.
    • PLOS ONE education paper: Included fabricated references, later retracted after investigation.

Joanne (#2):

      [Poll] Which shocking LLM signature phrase have you seen (or heard about) in published papers?

      1. “As of my last knowledge update”
      2. “As an AI language model, I cannot…”
      3. “Regenerate response”
      4. “Certainly! Here is a possible introduction:”
      5. “I’m sorry, but I don’t have access to real-time data.”
      6. “Let me pull that information for you!”
      7. Other (please list it)
        Please comment below.
Joanne (#3):

        Where do we go from here — through the lens of the CS top-tier conference rules?

        Many flagship venues have now staked out clear positions.

        • ICML and AAAI, for instance, continue to prohibit any significant LLM-generated text in submissions unless it’s explicitly part of the paper’s experiments (in other words, no undisclosed LLM-written paragraphs).

        • NeurIPS and the ACL family of conferences permit the use of generative AI tools but insist on transparency – authors must openly describe how such tools were used, especially if they influenced the research methodology or content.

        • Meanwhile, ICLR adopts a more permissive stance, allowing AI-assisted writing with only gentle encouragement toward responsible use (there is no formal disclosure requirement beyond not listing an AI as an author).

        With that in place, what might the next phase look like? Could it be something like the following?

        • One disclosure form to rule them all – expect a standard section (akin to ACL’s Responsible NLP Checklist, but applied across venues) where authors tick boxes: which tool was used, which prompts were given, at which stage, and what human edits were applied. (A sketch of what such a form might capture follows this list.)

        • Built-in AI-trace scanners at submission – Springer Nature’s “Geppetto” tool has shown it’s feasible to detect AI-generated text; conference submission platforms (CMT/OpenReview) might adopt similar detectors to nudge authors towards honesty before reviewers ever see the paper.

        • Fine-grained permission tiers – “grammar-only” AI assistance stays exempt from reporting, but any AI involvement in drafting ideas, claims, or code would trigger a mandatory appendix detailing the prompts used and the post-editing steps taken.

        • Authorship statements 2.0 – we’ll likely keep forbidding LLMs as listed authors, yet author contribution checklists could expand to include items like “AI-verified output,” “dataset curated via AI,” or “AI-assisted experiment design,” acknowledging more nuanced roles of AI in the research.

        • Cross-venue integrity task-forces – program chairs from NeurIPS↔ICML↔ACL could share a blacklist of repeat violators (much as journals share plagiarism data) and harmonize sanctions across conferences to present a united front on misconduct.
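
        To make the first bullet concrete, here is a hypothetical sketch of what a machine-readable, cross-venue disclosure record might capture. Every field name here is an assumption for illustration; no venue has adopted this schema.

```python
from dataclasses import dataclass, field

# Hypothetical cross-venue AI-use disclosure record (illustration only;
# no venue mandates this schema or these field names).
@dataclass
class AIUseDisclosure:
    tool: str                     # e.g. "ChatGPT (GPT-4)"
    stage: str                    # "grammar-only", "drafting", "code", ...
    prompts: list[str] = field(default_factory=list)  # prompts given, if any
    human_edits: str = ""         # summary of post-editing applied
    affects_claims: bool = False  # did AI output shape claims or methodology?

# Example: grammar-only assistance, which under a tiered policy
# (third bullet above) might stay exempt from detailed reporting.
disclosure = AIUseDisclosure(
    tool="ChatGPT (GPT-4)",
    stage="grammar-only",
    human_edits="All suggestions manually reviewed before acceptance.",
)
print(disclosure)
```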

        Or… will we settle for a loose system, with policies diverging year by year and enforcement struggling to keep pace?

        Your call: Is the field marching toward transparent, template-driven co-writing with AI, or are we gearing up for the next round of cat-and-mouse?
