AI has supercharged scientists—but may have shrunk science

16 January 2026, 19:51 | Celina Zhao

As artificial intelligence tools such as ChatGPT gain footholds across companies and universities, a familiar refrain is hard to escape: AI won’t replace you, but someone using AI might.

A paper published today in Nature suggests this divide is already creating winners and laggards in the natural sciences. In the largest analysis of its kind so far, researchers find that scientists embracing any type of AI, going all the way back to early machine learning methods, consistently make the biggest professional strides. AI adopters publish three times more papers, receive five times more citations, and reach leadership roles faster than their AI-free peers.

But science as a whole is paying the price, the study suggests. Not only is AI-driven work prone to circling the same crowded problems, but it also leads to a less interconnected scientific literature, with fewer studies engaging with and building on one another.

“I was really amazed by the scale and scope of this analysis,” says Yian Yin, a computational social scientist at Cornell University who has studied the impact of large language models (LLMs) on scientific research. “The diversity of AI tools and very different ways we use AI in scientific research makes it extremely hard to quantify these patterns.”

These results should set off “loud alarm bells” for the community at large, adds Lisa Messeri, a sociocultural anthropologist at Yale University. “Science is nothing but a collective endeavor,” she says. “There needs to be some deep reckoning with what we do with a tool that benefits individuals but destroys science.”

To uncover these trends, researchers began with more than 41 million papers published from 1980 to 2025 across biology, medicine, chemistry, physics, materials science, and geology. First, they faced a major hurdle: figuring out which papers used AI, a category that spans everything from early machine learning to today’s LLMs. “This is something that people have been trying to figure out for years, if not for decades,” Yin says.

The team’s solution was, fittingly, to use AI itself. The researchers trained a language model to scan titles and abstracts and flag papers that likely relied on AI tools, identifying about 310,000 such papers in the data set. Human experts then reviewed samples of the results and confirmed the model was about as accurate as a human reviewer.
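To make that approach concrete, here is a minimal sketch of such a flagging step. The study used a trained language model; this stand-in substitutes a much simpler TF-IDF text classifier, and the titles and labels below are invented for illustration.

```python
# Simplified stand-in for the study's AI-paper flagging step.
# The authors trained a language model; here, a TF-IDF bag-of-words
# classifier plays the same role. All examples below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labeled titles/abstracts (1 = likely relies on AI tools).
texts = [
    "Deep learning prediction of protein folding from sequence data",
    "A convolutional neural network for classifying galaxy morphologies",
    "Field measurements of sediment transport in alpine streams",
    "Synthesis and characterization of a novel copper catalyst",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Score an unseen abstract; in the study, samples of flagged papers
# were then reviewed by human experts to confirm accuracy.
new = ["Machine learning estimates of crop yield from satellite imagery"]
print(clf.predict(new))        # [1] -> flagged as an AI-using paper
print(clf.predict_proba(new))  # class probabilities behind the flag
```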

With that subset of papers, the researchers could then measure AI’s impact on the scientific ecosystem. Across the three major eras of AI—machine learning from 1980 to 2014, deep learning from 2016 to 2022, and generative AI from 2023 onward—papers that used AI drew nearly twice as many citations per year as those that did not. Scientists who adopted AI also published 3.02 times as many papers and received 4.84 times as many citations over their careers.

Benefits extended to career trajectories, too. Zooming in on 2 million of the researchers in the data set, the team found that junior scientists who used AI were less likely to drop out of academia and more likely to become established research leaders, doing so nearly 1.5 years earlier than peers who did not use AI.

But what was good for individuals wasn’t good for science. When the researchers looked at the overall spread of topics covered by AI-driven research, they found that AI papers covered 4.6% less territory than conventional scientific studies.
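The article does not spell out how "territory" was measured. One common proxy, sketched below under that assumption rather than as the authors' actual metric, is the dispersion of papers in a shared embedding space: the more tightly clustered the embeddings, the less topical ground the papers cover. The embeddings here are synthetic.

```python
# Hedged illustration of a topic-coverage ("territory") proxy:
# mean pairwise distance between paper embeddings. Not necessarily
# the study's metric; all data below is synthetic.
import numpy as np

rng = np.random.default_rng(0)

def territory(embeddings: np.ndarray) -> float:
    """Mean pairwise Euclidean distance: larger = topics more spread out."""
    diffs = embeddings[:, None, :] - embeddings[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(-1))
    n = len(embeddings)
    return dists.sum() / (n * (n - 1))  # average over off-diagonal pairs

# Stand-ins: AI papers clustered tightly, conventional papers spread out.
ai_papers = rng.normal(0.0, 0.5, size=(200, 16))
conventional = rng.normal(0.0, 1.0, size=(200, 16))
print(territory(ai_papers) < territory(conventional))  # True
```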

This clustering, the team hypothesizes, results from a feedback loop: Popular problems motivate the creation of massive data sets, those data sets make the use of AI tools appealing, and advances made using AI tools attract more scientists to the same problems. “We’re like pack animals,” says study co-author James Evans, a computational social scientist at the University of Chicago.

That crowding also shows up in the links between papers. In many fields, new ideas grow through dense networks of papers that cite one another, refine methods, and launch new lines of research. But AI-driven papers generated 22% less of this engagement across the natural science disciplines. Instead, they tended to orbit a small number of superstar papers, with fewer than one-quarter of papers receiving 80% of the citations.
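That concentration statistic, the smallest fraction of papers accounting for 80% of all citations, can be computed directly from citation counts. The sketch below shows one way, using synthetic heavy-tailed counts in place of the study's real data.

```python
# Hedged sketch of the concentration statistic quoted above: the
# smallest share of papers holding 80% of citations. Citation counts
# are synthetic; a Zipf draw mimics a few "superstar" papers.
import numpy as np

rng = np.random.default_rng(1)
citations = rng.zipf(a=2.0, size=10_000)  # heavy-tailed citation counts

def share_of_papers_for(citations, target=0.80):
    counts = np.sort(citations)[::-1]       # most-cited papers first
    cum = np.cumsum(counts) / counts.sum()  # cumulative citation share
    k = np.searchsorted(cum, target) + 1    # papers needed to reach target
    return k / len(counts)

print(f"{share_of_papers_for(citations):.1%} of papers hold 80% of citations")
```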

“When your attention is attracted by star papers like [the protein folding model] AlphaFold, all you’re thinking is how you can build on AlphaFold and beat other people to doing it,” says Tsinghua University co-author Fengli Xu. “But if we all climb the same mountains, then there are a lot of fields we are not exploring.”

“Science is seeing a degree of disruption that is rare,” says Dashun Wang, who researches the science of science at Northwestern University. The rapid rise of generative AI—which is reshaping research workflows faster than many scientific institutions can keep up—only makes the stakes higher and the future shape of science less certain, he says.

But the narrowing of science may still be reversible. One way to push back, says Zhicheng Lin, a psychologist at Yonsei University who studies the science of science, is to build better and larger data sets in fields that haven’t yet made much use of AI. “We are not going to improve science by forcing a shift away from data-heavy approaches,” he says. “A brighter future involves making data more abundant across more domains.”

Further down the line, AI systems should also evolve beyond crunching data into autonomous agents capable of scientific creativity, which could expand science's horizons again, says study co-author Yong Li, who studies AI and the science of science at Tsinghua.

Until then, Evans says, the scientific community must reckon with how these tools have affected incentives across the board. “I don’t think this is how AI has to shape science,” he says. “We want a world in which AI-enhanced work, which is getting increased funding and increasing in rate, is generating new fields—rather than just turning the thumbscrews on old questions.”


Cover photo: Moor Studio/istock.com
