New Study Reveals Limitations of AI in Detecting Human Deception

A pioneering study led by Michigan State University finds that AI personas are not yet reliable lie detectors, demonstrating both the promise and the significant limitations of current AI technology.

Can artificial intelligence effectively detect when a person is lying? Michigan State University-led researchers conducted an ambitious study to explore this provocative question, examining the capabilities and limitations of AI in discerning human deception.

Published in the Journal of Communication, the research involved 12 experiments with over 19,000 AI participants, scrutinizing how well AI personas could differentiate truth from lies told by human subjects.

“This research aims to understand how well AI can aid in deception detection and simulate human data in social scientific research, as well as caution professionals when using large language models for lie detection,” lead author David Markowitz, an associate professor of communication in the MSU College of Communication Arts and Sciences, said in a news release.  

To carry out their analysis, the researchers utilized Truth-Default Theory (TDT), which suggests that human beings generally tend to believe others are being honest, a trait thought to be evolutionarily advantageous.

Understanding this natural “truth bias” provided a benchmark for comparing AI’s detection responses to those of humans.

“Humans have a natural truth bias — we generally assume others are being honest, regardless of whether they actually are,” Markowitz added. “This tendency is thought to be evolutionarily useful, since constantly doubting everyone would take much effort, make everyday life difficult, and be a strain on relationships.” 

The study employed the Viewpoints AI research platform, presenting AI judges with audiovisual and audio-only media of human subjects.

The AI judges were tasked with determining the veracity of each statement and justifying their decisions. Variables such as media type, contextual background, lie-truth base rates and AI personas were manipulated to evaluate their influence on detection accuracy.

One notable finding from the research was that the AI exhibited a “lie bias,” identifying lies far more accurately (85.8%) than truths (19.5%). Although the AI matched human deception-detection performance in short interrogation settings, it deviated in non-interrogation contexts, displaying a truth bias similar to humans’.
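To put those per-class figures in perspective, a judge’s overall accuracy depends heavily on how many of the statements it sees are actually lies, which is one reason the researchers manipulated lie-truth base rates. The short Python sketch below combines the reported 85.8% and 19.5% accuracies at a few base rates; the base rates chosen are illustrative assumptions, not figures from the paper:

```python
# A minimal sketch, not from the study: weighting the reported per-class
# accuracies by an assumed lie-truth base rate to get overall accuracy.
# The base rates iterated below are illustrative, not from the paper.

LIE_ACCURACY = 0.858    # study's reported accuracy on lies
TRUTH_ACCURACY = 0.195  # study's reported accuracy on truths

def overall_accuracy(lie_base_rate: float) -> float:
    """Expected overall accuracy when this fraction of statements are lies."""
    return (lie_base_rate * LIE_ACCURACY
            + (1.0 - lie_base_rate) * TRUTH_ACCURACY)

for rate in (0.1, 0.5, 0.9):
    print(f"{rate:.0%} lies -> {overall_accuracy(rate):.1%} overall")
# -> roughly 26.1%, 52.7% and 79.2% respectively
```

In other words, a strongly lie-biased judge can look impressive when most of the statements it sees are deceptive, yet hover near or even below chance when truthful statements dominate.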

“Our main goal was to see what we could learn about AI by including it as a participant in deception detection experiments. In this study, and with the model we used, AI turned out to be sensitive to context — but that didn’t make it better at spotting lies,” added Markowitz. 

The findings underscore that AI’s current capabilities fall short of human accuracy in deception detection, pointing to the considerable advances needed before it can be used effectively for such applications.

“It’s easy to see why people might want to use AI to spot lies — it seems like a high-tech, potentially fair, and possibly unbiased solution. But our research shows that we’re not there yet,” concluded Markowitz. “Both researchers and professionals need to make major improvements before AI can truly handle deception detection.” 

Timothy Levine of the University of Alabama at Birmingham, who developed Truth-Default Theory, co-authored the study.

Source: Michigan State University