A new AI advisor model from the University of Chicago and Argonne National Laboratory shows how humans and self-driving labs can co-pilot scientific discovery, speeding up materials breakthroughs while deepening understanding.
In the debate over whether artificial intelligence will replace scientists, a new study from the University of Chicago Pritzker School of Molecular Engineering (UChicago PME) and Argonne National Laboratory offers a different vision: humans and machines sharing the controls.
Instead of putting either people or algorithms fully in charge of the lab, the team has created an “AI advisor” that sits between them, guiding a “self-driving lab” while keeping human judgment at the center of key decisions.
The work, published Dec. 18 in the journal Nature Chemical Engineering, outlines an adaptive AI decision interface designed to help researchers discover advanced electronic materials more quickly and intelligently.
The system is built to watch over the automated discovery process and call in humans when needed, according to Jie Xu, an assistant professor at UChicago PME with a joint appointment at Argonne, who led the research.
“The advisor will perform real-time data analysis and monitor the progress of the self-driving lab’s autonomous discovery journey. If the advisor observes a decline in performance, the advisor is going to prompt the human researchers to see if they want to switch the strategy, refine the design space or so on,” Xu said in a news release.
In other words, the AI advisor crunches the data and tracks how well the experiments are going, but it does not blindly push ahead. When performance drops or the system seems stuck, it alerts scientists, who can then adjust the search strategy, change parameters or redirect the lab’s efforts.
This makes the whole process more flexible than traditional autonomous systems that commit to a single strategy from start to finish, according to Xu.
“Compared to the traditional self-driving lab where we stick with one decision strategy from the beginning to the end, this makes the entire decision workflow adaptive and boosts the performance significantly,” Xu added.
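The study's actual implementation is not reproduced here, but the workflow Xu describes, in which the advisor watches results, detects a plateau and hands the decision back to a person, can be sketched in a few lines of Python. Every name below, from run_experiment to the stall test, is an assumption for illustration and not part of the Polybot or advisor interface:

```python
# Illustrative sketch only: these names (run_experiment, STRATEGIES, stalled)
# are hypothetical, not the actual Polybot or AI-advisor API.
import random

STRATEGIES = ["bayesian_optimization", "random_search", "grid_refinement"]

def run_experiment(strategy: str) -> float:
    """Stand-in for one automated design-make-test cycle; returns a score."""
    return random.random()  # placeholder for a measured material property

def stalled(history: list[float], window: int = 5) -> bool:
    """Flag a decline: no improvement over the last `window` experiments."""
    if len(history) <= window:
        return False
    return max(history[-window:]) <= max(history[:-window])

def ask_human(current: str) -> str:
    """Prompt the researcher to keep or switch the decision strategy."""
    choice = input(f"Performance is stalling under '{current}'. "
                   f"Pick one of {STRATEGIES} or press Enter to continue: ")
    return choice.strip() or current

history, strategy = [], STRATEGIES[0]
for step in range(50):
    history.append(run_experiment(strategy))
    if stalled(history):                 # advisor detects a plateau...
        strategy = ask_human(strategy)   # ...and hands the choice to a human
```

The key design choice is the middle step: rather than silently switching strategies on its own, the loop pauses and invites a human decision, which is what distinguishes the advisor from a fully autonomous optimizer.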
The team tested the approach on an electronic materials challenge using Polybot, a “self-driving lab” at Argonne’s Center for Nanoscale Materials that can rapidly design, make and test materials with minimal hands-on human work.
In this study, Polybot focused on a mixed ion-electron conducting polymer, or MIECP, a class of materials that can move both ions and electrons. Such materials are important for technologies like flexible electronics, energy storage and bioelectronics, where devices need to conduct signals efficiently in complex environments.
By combining the AI advisor with the automated lab and human input, the researchers produced a MIECP with dramatically improved performance. The new material showed a 150% increase in mixed conducting performance compared with MIECPs made using the previous state-of-the-art technique, according to the team.
Beyond the performance jump, the system also helped uncover why the material worked better. The researchers identified two structural features that were key to boosting volumetric capacitance: larger crystalline lamellar spacing and higher specific surface area. Those insights can guide future materials design, not just for this polymer but for related systems.
That combination of better performance and deeper understanding hits both of the main goals in materials science, noted co-corresponding author Sihong Wang, an associate professor at UChicago PME.
“For material science research, there are two intercorrelated goals,” Wang said in the news release. “One is to improve the material’s performance or develop new performance. But to enable that, you need the second goal: a deep understanding about how different material design strategies, parameters and processing conditions will influence that performance. By making the entire space of the structure variation much larger, this AI model has helped to achieve two goals at the same time.”
The project also pushes back on the idea that the future of science belongs to fully self-improving AI systems that operate with minimal human involvement.
“People have been focusing a lot on self-improving AI—AI that can modify its own algorithm, generate its own data set, retrain itself and all that,” added co-corresponding author Henry Chan, a staff scientist in Argonne’s Nanoscience and Technology division. “But here we’re taking a cooperative approach where humans can play a role in the process also. We want to facilitate the collaboration between human and AI to achieve co-discovery.”
That cooperative model plays to the strengths of each partner. AI is powerful at sifting through large, complex datasets and spotting patterns that would be hard for humans to see. But it struggles when data are sparse or when intuition and experience matter most.
“While AI is excellent at this form of data analysis, it falters at decision-making when there are few data points to guide it,” Xu added.
Human researchers, by contrast, are used to making judgment calls under uncertainty, drawing on experience, creativity and domain knowledge. The AI advisor is designed to recognize when the automated system is entering a regime where human intuition is likely to be valuable and to invite that input.
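An escalation rule of that kind might look something like the following sketch, where the advisor defers to researchers whenever data are sparse or the model's predictive uncertainty is high. The thresholds and function names are assumptions for illustration, not values from the study:

```python
# Hypothetical escalation rule, not the published method: call in a human
# when there are too few data points or the model is unsure of its predictions.
def should_consult_human(n_observations: int,
                         predictive_std: float,
                         min_points: int = 10,
                         std_threshold: float = 0.5) -> bool:
    """True when data are sparse or predictive uncertainty is high."""
    too_few_points = n_observations < min_points
    too_uncertain = predictive_std > std_threshold
    return too_few_points or too_uncertain

# Example: early in a campaign, with 4 data points and high uncertainty,
# the advisor would defer to the researchers.
print(should_consult_human(n_observations=4, predictive_std=0.8))  # True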
The framework is meant to be broadly useful, not just for Polybot or for polymers, according to Xu.
“The methodology that we use for this study offers a generalizable framework that can be adopted by other self-driving labs,” added Xu. “But basically, we cannot remove humans from the lengthy design, fabrication and test-analysis loop. We promote human-machine collaboration to boost discovery together.”
Looking ahead, the team wants to deepen the two-way conversation between people and machines. Right now, the AI advisor mostly sends information out to humans, who may or may not act on it.
“Currently, the interaction is mostly one-way. Information is coming from the AI advisor, then humans take optional actions,” Chan added.
The next step is to let the AI learn from those human actions and adjust its own behavior over time.
“In the future, we want a tighter integration between AI and humans, where the AI can learn from human actions and modify the way it thinks in subsequent iterations, modeling the way of human decision-making,” added Chan.
If that vision pans out, future “self-driving” labs may look less like fully autonomous robots and more like collaborative partners — systems that accelerate discovery while still relying on human curiosity, insight and responsibility to steer science into new territory.
Source: The University of Chicago Pritzker School of Molecular Engineering

