Most Patients Trust Doctors Over AI, but Welcome Cancer-Detecting Tech

New national surveys show Americans are wary of letting AI diagnose their illnesses alone but are optimistic about AI tools that help doctors spot cancer earlier. Even brief exposure to AI appears to boost trust and excitement about its role in health care.

Most Americans are not ready to let artificial intelligence diagnose their illnesses on its own, but they are hopeful about AI tools that can help doctors catch cancer earlier, new research shows.

In two nationally representative surveys, researchers found that people overwhelmingly trust human doctors more than AI systems for health decisions. Yet when participants were shown a specific example of an AI tool that helps detect early signs of cervical cancer, many shifted toward seeing AI as promising rather than threatening.

The study, presented this week at the Society for Risk Analysis annual meeting in Washington, was led by Michael Sobolev, a behavioral scientist in the Schaeffer Institute for Public Policy & Government Service at the University of Southern California, and Patrycja Sleboda, an assistant professor of psychology at Baruch College, City University of New York. Their work focuses on how people think and feel about AI in medicine, especially in cancer diagnosis, where the technology is already being tested and used.

The team set out to measure public attitudes toward AI in health care across several dimensions: trust, understanding, perceived potential, excitement and fear. They also examined how those attitudes differ by age, gender and education level.

One clear pattern emerged: human expertise still comes first. Only about 1 in 6 respondents, or 17%, said they trust AI as much as a human expert to diagnose health problems.

At the same time, the surveys suggest that familiarity with AI makes a difference. People who had used tools such as ChatGPT in their personal lives tended to feel more positive about AI in medicine. They reported greater understanding of the technology and expressed more excitement about, and trust in, its use in health care.

Overall, 55.1% of respondents said they had heard of ChatGPT but not used it, while 20.9% had both heard of and used it, leaving roughly a quarter who had not encountered the tool at all.

That pattern fits with what researchers know about how people respond to new technologies, according to Sleboda.

“Our research shows that even a little exposure to AI—just hearing about it or trying it out—can make people more comfortable and trusting of the technology. We know from research that familiarity plays a big role in how people accept new technologies, not just AI,” Sleboda said in a news release.

The first survey asked participants whether they had heard of or used AI technologies and then gauged their general trust in AI for health diagnoses.

The second survey went a step further. Participants were given a short description of a real-world development: an AI system that analyzes digital images of the cervix to detect precancerous changes, a method known as automated visual evaluation. This kind of tool is being explored as a way to improve cervical cancer screening, especially in settings where access to specialists is limited.

After reading about the tool, participants rated five aspects of their acceptance of it on a 1-to-5 scale: understanding, trust, excitement, fear and perceived potential.

When the researchers analyzed the results, perceived potential ranked highest among the five aspects, followed by excitement, trust, understanding and, lastly, fear. In other words, when people pictured a concrete cancer-detecting application, they tended to see more upside than risk.

Sobolev said the contrast between broad, abstract views of AI and reactions to a specific medical use case stood out.

“We were surprised by the gap between what people said in general about AI and how they felt in a real example,” Sobolev, who leads the Behavioral Design Unit at Cedars-Sinai Medical Center in Los Angeles, said in the news release.

The surveys also revealed demographic differences. Participants who identified as male and those with a college degree were more likely to express trust, excitement and a sense of potential about AI in health care. They also reported lower levels of fear about AI overall.

Those patterns matter because AI is rapidly moving into clinical settings, from reading medical images to helping flag patients at risk of complications. Public trust will influence how quickly these tools are adopted and how comfortable patients feel when they encounter them.

The study suggests that one way to build that trust is to move beyond buzzwords and focus on specific, understandable examples of how AI can help. Sobolev said the cervical cancer scenario appeared to make AI feel more concrete and less mysterious to participants.

“Our results show that learning about specific, real-world examples can help build trust between people and AI in medicine,” added Sobolev.

Source: Society for Risk Analysis