A massive analysis of more than 41 million papers shows AI tools supercharge individual scientists’ productivity and impact, yet shrink the overall range of scientific questions being explored. Researchers say the next generation of AI must help create new data and directions, not just optimize what already exists.
Artificial intelligence is helping individual scientists work faster, publish more and gain influence earlier in their careers. But at the same time, it may be quietly narrowing what science as a whole pays attention to.
That is the central message of a new study led by James Evans, Faculty Co-Director of Novel Intelligence, Max Palevsky Professor of Sociology and Data Science, and director of the Knowledge Lab at the University of Chicago. By examining 41.3 million research papers, Evans and his colleagues show that AI tools are reshaping the scientific landscape in powerful and paradoxical ways.
On one hand, the benefits for individual researchers are striking. Scientists who use AI publish more than three times as many papers as their peers who do not. Their work is cited nearly five times as often, and they become recognized research leaders more than a year earlier.
Those gains reflect what many in labs and offices already feel: AI can help draft text, summarize literature, analyze data, and even generate code or models, allowing scientists to move from idea to result much more quickly.
But when the researchers zoomed out to look at the entire scientific ecosystem, a different pattern emerged. As AI adoption rises, the number of distinct scientific topics shrinks. The study, published in the journal Nature, finds that AI use is associated with a 4.63% reduction in the range of topics being studied and a 22% drop in engagement among scientists working in different areas.
In other words, AI is making individual scientists more powerful while making the collective enterprise more concentrated.
The team traced this contraction to where and how AI is being used. Scientists tend to deploy AI in fields with abundant, well-structured data and clear benchmarks where progress can be easily measured. Those are the places where machine learning and related tools shine, producing quick, visible advances.
That dynamic pulls attention toward data-rich domains and away from areas where data are sparse, messy or harder to standardize. Potentially promising but less “AI-ready” topics are left underexplored, even as popular areas become more crowded.
The study describes these crowded areas as “lonely crowds,” a term the authors use for hot topics that attract many papers but relatively little meaningful interaction among the researchers producing them. In these zones, researchers often cite the same foundational work yet develop overlapping approaches and similar solutions.
Instead of branching out into new questions or methods, scientists converge on familiar problems and tools. That convergence can speed up refinement of existing ideas, but it also risks limiting the diversity of approaches that has historically driven major scientific breakthroughs.
Evans has previously warned about this danger in his Science article titled “After Science,” where he argued that the efficiency of AI could encourage what he calls methodological monocultures. When many researchers rely on similar algorithms, datasets and evaluation metrics, the field can settle too quickly on a dominant way of thinking, leaving alternative paths unexplored.
The new study extends that concern from methods to the very scope of science. It suggests that, without careful guidance, AI may push the research community toward optimizing what is already known rather than discovering what is not yet imagined.
That does not mean AI is bad for science. Instead, the authors argue, it means the current way AI is being used tends to favor exploitation over exploration.
To counter that tendency, the study points to several policy and design opportunities. Funding agencies and institutions can encourage new data collection in underexplored areas, making them more accessible to AI tools. They can also reward work that opens up new questions or datasets, not just papers that rack up citations in already popular fields.
On the technical side, the researchers highlight the potential for AI systems that are built for exploration. The same models that excel at predicting likely outcomes can also be tuned to detect surprising patterns and rare events. That capability could help scientists notice anomalies, unexpected results or unusual combinations of ideas that might otherwise be overlooked.
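To make that idea concrete, here is a minimal sketch of the general technique, not the authors’ actual system: fit a simple density model to data that represents “typical” work, then treat low likelihood under that model as a surprise score worth a human look. The synthetic data, the Gaussian mixture model, and the 1% flagging threshold are all illustrative assumptions.

```python
# Illustrative sketch: repurposing a model of "likely" data to surface
# surprising items instead of only predicting familiar ones.
# The data here is synthetic; in practice the vectors might be embeddings
# of papers, measurements, or experimental results.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Most points come from two familiar clusters (stand-ins for crowded topics)...
familiar = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(500, 2)),
    rng.normal(loc=6.0, scale=1.0, size=(500, 2)),
])
# ...plus a handful of scattered points standing in for rare, unexpected findings.
rare = rng.uniform(low=-10.0, high=16.0, size=(10, 2))
X = np.vstack([familiar, rare])

# Fit a density model to capture what "typical" data looks like.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

# Surprise score = negative log-likelihood under the fitted model:
# the less likely a point, the more surprising it is.
surprise = -gmm.score_samples(X)

# Flag the top 1% most surprising points for human inspection.
threshold = np.quantile(surprise, 0.99)
flagged = np.flatnonzero(surprise >= threshold)
print(f"Flagged {len(flagged)} unusually surprising points for review.")
```

The design choice matters: the same scoring step that a conventional pipeline might use to filter outliers out is inverted here to flag them for follow-up, which is the exploratory orientation the authors are calling for.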
The authors argue that this shift in design is essential.
“To preserve collective exploration in an era of AI use,” they write, “we will need to reimagine AI systems that expand not only cognitive capacity but also sensory and experimental capacity, enabling and incentivizing scientists to search, select and gather new types of data from previously inaccessible domains rather than merely optimizing analysis of standing data.”
In practical terms, that could mean AI tools that suggest new experiments instead of just analyzing old ones, or systems that help design instruments and protocols to capture data from phenomena that have rarely been measured before.
The study frames this as a test for the future of AI in science. If AI remains focused mainly on compressing and reusing existing data, it may accelerate short-term progress while slowly narrowing the frontier of discovery. If, instead, AI is harnessed to create new data, new questions and new ways of seeing the world, it could support a more open and sustainable form of scientific advance.
For students and early-career researchers, the findings are both a caution and an invitation. AI can be a powerful ally in building a career, but the long-term health of science will depend on those who use these tools to venture into less crowded territory.
The next wave of innovation, the authors suggest, will not come only from faster analysis of familiar datasets, but from reimagining AI as a partner in exploration — one that helps science look where it has not yet dared to look.

