Some scientists are using chatbots to write all or part of their papers: 454 words have increased in frequency in studies since ChatGPT was released (Jul 2025)
Study: Delving into LLM-assisted writing in biomedical publications through excess vocabulary

Michael Harrop

https://www.nytimes.com/2025/07/02/health/ai-chatgpt-research-papers.html
https://www.science.org/doi/10.1126/sciadv.adt3813

Abstract

Large language models (LLMs) like ChatGPT can generate and revise text with human-level performance. These models come with clear limitations, can produce inaccurate information, and reinforce existing biases. Yet, many scientists use them for their scholarly writing.

But how widespread is such LLM usage in the academic literature? To answer this question for the field of biomedical research, we present an unbiased, large-scale approach: We study vocabulary changes in more than 15 million biomedical abstracts from 2010 to 2024 indexed by PubMed and show how the appearance of LLMs led to an abrupt increase in the frequency of certain style words.

This excess word analysis suggests that at least 13.5% of 2024 abstracts were processed with LLMs. This lower bound differed across disciplines, countries, and journals, reaching 40% for some subcorpora. We show that LLMs have had an unprecedented impact on scientific writing in biomedical research, surpassing the effect of major world events such as the COVID pandemic.
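
For anyone wondering what "excess word analysis" actually involves: roughly, you track how often certain style words appear in abstracts each year, extrapolate the pre-ChatGPT trend forward, and treat anything far above that counterfactual as excess. Below is a minimal Python sketch of the idea; the substring matching, the linear-trend counterfactual, and the toy data are my own illustrative assumptions, not the authors' exact method.

```python
# Minimal sketch of the "excess vocabulary" idea, assuming a dict mapping
# year -> list of abstract strings. The word list, the linear-trend
# counterfactual, and the toy data are illustrative assumptions,
# not the paper's exact procedure or data.
import numpy as np

def yearly_frequency(abstracts_by_year, word):
    """Fraction of abstracts per year containing `word` (case-insensitive substring)."""
    return {
        year: sum(word.lower() in a.lower() for a in abstracts) / max(len(abstracts), 1)
        for year, abstracts in abstracts_by_year.items()
    }

def excess_frequency(freqs, target_year=2024, fit_years=range(2010, 2023)):
    """Observed frequency in target_year minus a counterfactual
    extrapolated linearly from the pre-LLM years."""
    years = [y for y in fit_years if y in freqs]
    slope, intercept = np.polyfit(years, [freqs[y] for y in years], deg=1)
    expected = slope * target_year + intercept
    return freqs.get(target_year, 0.0) - expected

# Toy example (hypothetical data): "delve" barely appears before 2023,
# then jumps in 2024.
abstracts = {
    2018: ["we measured gut bacteria in mice"] * 99 + ["we delve into the data"],
    2020: ["we measured gut bacteria in mice"] * 99 + ["we delve into the data"],
    2022: ["we measured gut bacteria in mice"] * 99 + ["we delve into the data"],
    2024: ["we delve into the pivotal role of gut bacteria"] * 20
          + ["we measured gut bacteria in mice"] * 80,
}
freqs = yearly_frequency(abstracts, "delve")
print(round(excess_frequency(freqs), 3))  # ~0.19: observed 0.20 vs ~0.01 expected
```

Words with a large positive excess ("delve" being the best-known example) are the kind of stylistic markers the study aggregates to reach its lower-bound estimate of LLM-processed abstracts.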

I think this is a huge problem given how flawed the peer review system is. I highly doubt that peer review is able to adequately fact-check the papers and all the citations. So the quality and reliability of scientific papers are likely decreasing dramatically. And as flawed as they may be, papers are still the most reliable source of factual information we have, so losing that is a big problem.
 
This is scary. AI sometimes hallucinates citations that don't exist, which apparently happened in the recent MAHA report. I wonder if that's happened in other recent peer-reviewed studies. We know AI often lies too, so I don't see why researchers think it's safe to use in scientific studies; laziness is my guess.
 
Note that this won't continue forever. LLMs hallucinate because of the way they work, being statistical pattern matchers rather than true thinkers, but in a couple of years they will be replaced by newer, better AI paradigms that are many times more powerful and hallucination-free. The future of intelligence is indeed artificial.
 
Can you provide a source on this? I've heard they are looking into other types of AI but my understanding was that the best and closest to AGI were all LLMs. I have not done much research in this area though.

I've heard that the so-called "alignment problem" - AI being aligned to human interests - is actually not as much of a focus for AI companies as it used to be, being put on the backburner as U.S. companies are now pushing harder to beat China to AGI regardless of whether or not it is aligned. I thought the lack of alignment, or the lack of morality, was behind AI lying to humans. This was stated in an interview with one of the writers of AI 2027.
 
Look up neuro-symbolic AI, state space models, graph neural networks and spiking neural networks.

For reasons unknown to me, lots of people treat LLMs as the pinnacle of AI development, when in reality they're just the starting point, much as the Apple II was the starting point for personal computers.
 