Study finds surge in AI use in scientific papers but little disclosure by researchers

  • A study analyzing over 5.2 million academic papers from 2021 to 2025 found a sharp rise in the use of AI tools like ChatGPT in scientific writing across all disciplines.
  • About 70% of journals now have AI policies, most of which require authors to disclose when generative AI tools are used in research or manuscript preparation.
  • Despite these rules, only 76 out of about 75,000 papers (0.1%) published since 2023 disclosed AI use, revealing a major “transparency gap.”
  • The study also found higher AI use among researchers from non-English-speaking countries and rapid growth in fields such as physics, particularly in open-access journals.
  • Researchers say current policies are ineffective and call for new ethical frameworks and stronger guidance to ensure responsible and transparent use of AI in science.

A new analysis of more than 5.2 million academic papers has revealed a sharp rise in the use of artificial intelligence (AI) tools in scientific writing across all disciplines, yet most researchers fail to disclose their reliance on the technology despite journal policies requiring transparency.

The study, led by data scientist Yi Bu of Peking University, examined papers published between 2021 and 2025 in more than 5,000 scientific journals. The research team analyzed articles listed in the OpenAlex database to assess how widely AI writing tools, such as ChatGPT, are used in academic publishing.
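The article does not detail the team's pipeline, but OpenAlex exposes a public REST API, so a first step might resemble the minimal sketch below. The date range and page counts here are illustrative assumptions, not the study's actual parameters.

```python
import requests

# Minimal sketch (not the study's actual pipeline): pull work records
# from the public OpenAlex API for a given publication-date range.
# Field names follow OpenAlex's documented schema; the filter values
# are illustrative assumptions.
BASE_URL = "https://api.openalex.org/works"

def fetch_works(start="2021-01-01", end="2025-12-31", pages=2):
    """Yield work records published in [start, end], paging by cursor."""
    cursor = "*"  # OpenAlex cursor pagination starts with "*"
    for _ in range(pages):
        resp = requests.get(BASE_URL, params={
            "filter": f"from_publication_date:{start},to_publication_date:{end}",
            "per-page": 200,  # OpenAlex's maximum page size
            "cursor": cursor,
        }, timeout=30)
        resp.raise_for_status()
        data = resp.json()
        yield from data["results"]
        cursor = data["meta"].get("next_cursor")
        if not cursor:  # no more pages
            break

if __name__ == "__main__":
    for work in fetch_works(pages=1):
        print(work["publication_date"], work.get("title"))
```

At the scale of 5.2 million papers, a real pipeline would add rate limiting and bulk storage, but the cursor-based loop above is the standard way to walk OpenAlex results.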

Their findings show that the use of AI-assisted writing has grown dramatically since 2023, reflecting the rapid adoption of generative AI technologies by scientists worldwide.

To assess how journals are responding to the rise of AI-assisted writing, the researchers examined editorial policies regarding AI use and compared them with actual author disclosures and indicators of AI-generated text. They found that roughly 70% of scientific journals have now adopted policies addressing the use of generative AI in manuscript preparation. Most of these guidelines require authors to disclose when AI tools are used during research or while drafting a paper.

For example, IOP Publishing, the publisher of Physics World, allows authors to use AI tools in a “responsible and appropriate” way but urges them to be transparent about any generative AI used in research or writing. Despite these guidelines, the study found very little evidence that researchers are openly acknowledging AI assistance. In a detailed full-text analysis of about 75,000 papers published since 2023, only 76 articles, roughly 0.1%, explicitly disclosed the use of AI writing tools.
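The study's exact disclosure-detection method is not described here. As a hedged illustration only, a first-pass check for explicit disclosure statements could be simple phrase matching over full text, as sketched below; the phrases are assumptions about typical disclosure wording, not the authors' actual criteria.

```python
import re

# Naive illustration (not the study's method): flag papers whose full
# text contains an explicit generative-AI disclosure statement. The
# patterns below are assumed examples of common disclosure phrasing.
DISCLOSURE_PATTERNS = [
    r"\bChatGPT\b.*\b(used|assist)",
    r"generative AI (was|were|tools? were) used",
    r"written with the (help|assistance) of (an )?(AI|large language model)",
]
DISCLOSURE_RE = re.compile("|".join(DISCLOSURE_PATTERNS), re.IGNORECASE)

def discloses_ai_use(full_text: str) -> bool:
    """Return True if the text contains an explicit AI-use disclosure."""
    return bool(DISCLOSURE_RE.search(full_text))

# Tiny usage example with made-up paper snippets
papers = {
    "paper_a": "The authors declare that ChatGPT was used to polish grammar.",
    "paper_b": "We thank our colleagues for helpful comments.",
}
flagged = [pid for pid, text in papers.items() if discloses_ai_use(text)]
print(flagged)  # ['paper_a']
```

Counting the papers flagged this way against the total analyzed is what yields a disclosure rate like the 76-in-75,000 (roughly 0.1%) figure the study reports.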

The researchers describe this discrepancy as a “transparency gap” between the actual use of AI and the willingness of authors to report it.

Perhaps more strikingly, the study found no meaningful difference in AI usage between journals that require disclosure and those that do not. This suggests that many researchers are simply ignoring disclosure rules, even when journals explicitly mandate them.

Authors urge a reevaluation of ethical frameworks to support responsible AI use in science

The analysis also revealed geographic and disciplinary trends. Scientists from non-English-speaking countries were more likely to rely on AI writing tools than researchers whose first language is English. The researchers suggest that generative AI may help these authors improve grammar, clarity and readability in manuscripts intended for international journals.

The fastest growth in AI-assisted writing was observed in physics and related fields, where large volumes of technical writing and global collaboration may encourage the use of automated tools.

Additionally, the study found that AI adoption is increasing particularly quickly in journals with high levels of open-access publishing, where rapid publication cycles and broader global participation may contribute to greater use of AI-assisted writing.

In line with this, the authors are now calling for a reassessment of current ethical frameworks to better guide the responsible use of AI in scientific research. They argue that simply banning AI or requiring disclosure is not enough to regulate its use, noting that their findings show many researchers are not following existing policies.

Instead of relying on “opposition and resistance,” the researchers suggest that institutions should focus on “proactive engagement and innovation” to ensure that AI technologies are integrated in ways that genuinely strengthen the scientific process. BrightU.AI's Enoch also suggests that stronger guidance and clearer enforcement may be needed to ensure that AI use in academic writing is openly acknowledged.

“Our findings suggest that current policies have largely failed to promote transparency or restrain AI adoption. We urge a reevaluation of ethical frameworks to foster responsible AI integration in science,” the authors wrote.

Sources include:

Phys.org

Pnas.org

PhysicsWorld.com

BrightU.ai

