Penn State researchers are using artificial intelligence to read the hidden signals in everyday speech, aiming to catch dementia and Alzheimer’s years earlier than today’s paper tests. Their goal is to give clinicians faster, more objective tools so patients can get help sooner.
More than 7 million Americans age 65 and older are living with Alzheimer’s disease, and that number is expected to keep climbing. Yet many people are not diagnosed until their memory and thinking problems are already affecting daily life.
Researchers at Penn State say artificial intelligence could help change that timeline by listening closely to how people speak.
Hui Yang, the Gary and Sheila Bello Chair in Industrial and Manufacturing Engineering at Penn State, and doctoral candidate Kevin Mekulu have developed an AI-based framework that analyzes speech to flag early signs of dementia and Alzheimer’s. Their recent work, published in the Journal of Alzheimer’s Disease Reports and Frontiers in Aging Neuroscience, suggests that this approach could detect cognitive decline earlier and more accurately than traditional paper-and-pencil exams.
The team’s goal is not to replace doctors, but to give them a faster, more objective way to spot subtle changes that might otherwise be missed.
Traditional screening tools for dementia are usually short questionnaires or tasks that must be administered by trained staff. Yang noted that these tests can take 10 to 15 minutes of staff time, depend heavily on whoever administers them, and often miss the earliest, most subtle changes in thinking.
At the same time, the United States faces a shortage of geriatric specialists, with roughly one geriatrician for every 10,000 older patients, and high staff turnover in many senior care facilities. That makes it difficult to offer frequent, high-quality cognitive screening to everyone who might benefit.
Yang and Mekulu see AI as a way to scale up screening without adding to the burden on clinicians.
Their system focuses on speech, one of the most complex things humans do. Speaking requires memory, attention, language skills, decision-making and motor control to work together seamlessly. Many of those systems are affected early in neurodegenerative diseases, often before family members or clinicians notice obvious problems.
Instead of relying on a clinician’s impression of how someone sounds, the Penn State framework turns speech into data. It looks for patterns in word choice, repetition, pauses, fluency and how sentences are structured. It then uses those patterns to extract what researchers call speech-based biomarkers — measurable signals that can be tracked over time.
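To make the idea concrete, the short Python sketch below shows what such speech-based measures could look like in code. It is an illustrative example only, not the Penn State team's actual pipeline: the feature names, the pause-marker convention and the filler-word list are all assumptions, and a real system would work from recorded audio and time-aligned transcripts rather than plain text.

```python
import re
from collections import Counter

def speech_features(transcript: str) -> dict:
    """Extract simple, illustrative speech measures from a text transcript.

    Assumes pauses are marked with "..." and hesitations appear as filler
    words such as "um" and "uh" in the transcript text.
    """
    words = re.findall(r"[a-zA-Z']+", transcript.lower())
    counts = Counter(words)
    fillers = {"um", "uh", "er", "hmm"}
    total = len(words) or 1  # avoid division by zero for empty input

    return {
        "total_words": total,
        # Lexical diversity: fewer unique words can reflect word-finding trouble.
        "type_token_ratio": len(counts) / total,
        # Repetition: share of words that immediately repeat the previous word.
        "repetition_rate": sum(a == b for a, b in zip(words, words[1:])) / total,
        # Hesitation: filler words per word spoken.
        "filler_rate": sum(counts[f] for f in fillers) / total,
        # Pauses: count of "..." markers in the raw transcript.
        "pause_count": transcript.count("..."),
    }

print(speech_features("I went to the... um... the store, the store on... uh... Main Street"))
```

Tracked over repeated recordings, even simple measures like these become the kind of quantifiable, longitudinal signal the researchers describe as speech-based biomarkers.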
By focusing on these hidden dynamics and transitions in speech, the AI can pick up on changes that might be too subtle for the human ear. The researchers say this could allow clinicians to see signs of trouble years before standard tests would raise a red flag.
The system is also designed to be quick. Rather than a lengthy exam, the AI can screen for potential problems in under a minute, using short, guided interactions with patients.
Mekulu emphasized that the type of AI they are building goes beyond the simple, one-shot models that are already used in some health care tools. Those systems are often described as static because they take an input, produce an output and stop there.
In contrast, the Penn State team is working with what they describe as agentic AI — systems that can plan, adapt and interact over time. In their framework, AI agents do more than score a test. They guide the conversation, adjust their prompts based on how a person responds and combine multiple types of information, such as language patterns, task performance and context, into a single assessment.
That turns screening from a one-time snapshot into a more dynamic process that better reflects how cognitive decline actually unfolds.
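A toy sketch can illustrate what "agentic" means in this context: an agent that chooses its next prompt based on how the person just responded and folds each observation into a running assessment. Again, this is not the team's published framework; the prompts, the scoring rule and the 0.8 threshold below are made-up placeholders chosen only to show the adaptive loop.

```python
from dataclasses import dataclass, field

@dataclass
class ScreeningAgent:
    """Illustrative agent: adapts its next prompt to the previous answer
    and combines observations into a single (toy) assessment."""
    evidence: list = field(default_factory=list)

    def next_prompt(self) -> str:
        if not self.evidence:
            return "Tell me about your morning so far."
        # Adapt: if the last response was heavy on hesitations, switch to a
        # shorter, more structured task; otherwise probe recall further.
        if self.evidence[-1]["filler_rate"] > 0.15:
            return "Can you name as many animals as you can in 30 seconds?"
        return "What did you have for dinner yesterday evening?"

    def observe(self, features: dict) -> None:
        # Each observation might come from speech analysis, task performance
        # or other context; here it is just a dictionary of numbers.
        self.evidence.append(features)

    def assessment(self) -> str:
        avg_filler = sum(e["filler_rate"] for e in self.evidence) / len(self.evidence)
        avg_diversity = sum(e["type_token_ratio"] for e in self.evidence) / len(self.evidence)
        risk = avg_filler + (1 - avg_diversity)  # toy combination of signals
        return "flag for follow-up" if risk > 0.8 else "no concern at this time"

agent = ScreeningAgent()
print(agent.next_prompt())                                        # open-ended opener
agent.observe({"filler_rate": 0.20, "type_token_ratio": 0.55})    # e.g. from speech analysis
print(agent.next_prompt())                                        # adapts to the hesitant response
print(agent.assessment())
```

The point of the sketch is the loop itself: prompt, observe, adapt, accumulate. That cycle is what distinguishes an interactive screener from a one-shot model that scores a single input and stops.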
While speech is the starting point, Mekulu said it is only one piece of a much larger puzzle. Similar AI tools could eventually analyze eye movements, heart rate and other physiological signals, how engaged someone is with a task, fine motor movements and even how a person learns or adapts during problem-solving.
Taken together, those signals could give clinicians a more complete picture of brain health, rather than a simple pass-or-fail score on a test.
The researchers also see potential for AI agents to support care long after an initial diagnosis. These systems could monitor changes between clinic visits, alert clinicians when a patient’s condition seems to be shifting and help tailor treatment plans over time. By handling some of the routine monitoring and data analysis, AI could free up clinicians to spend more time on complex decisions and direct patient care.
To move from lab models to real-world tools, Yang and Mekulu are now testing their methods in different populations and clinical settings to make sure the AI is robust and fair. They are collaborating with Nicole Etter, an associate professor in Penn State’s Department of Communication Sciences and Disorders, and neuropsychologist Tim Brearly from Penn State Health to explore how the technology could fit into assisted living and memory care environments.
Those settings are often where the earliest, most subtle cognitive changes show up, but large-scale, objective screening tools are rarely used there. The team hopes to bridge the gap between academic research and everyday practice by validating their AI framework in the places where older adults actually live and receive care.
Faisal Aqlan, an associate professor of industrial and systems engineering at the University of Louisville, is also a co-author on the work.
As the population ages and dementia cases rise, early detection is becoming more urgent. The Penn State team believes that listening differently — with the help of AI — could give patients and families more time to plan, access treatments and maintain quality of life.
Source: Pennsylvania State University