AI Outperforms Human Crisis Responders in Empathy: New Study

Researchers at the University of Toronto Scarborough found that AI generates responses rated as more compassionate than those of both ordinary individuals and professional crisis responders, opening a discussion about its role in mental health care.

Artificial intelligence might be better at displaying empathy than humans, according to a new study conducted by researchers at the University of Toronto Scarborough.

The research, published in Communications Psychology, suggests that AI tools like ChatGPT can produce responses rated as more compassionate than those given by both ordinary people and trained crisis responders.

“AI doesn’t get tired,” lead author Dariya Ovsyannikova, a lab manager in Michael Inzlicht’s lab, said in a news release. “It can offer consistent, high-quality empathetic responses without the emotional strain that humans experience.”

The study comprised four separate experiments in which participants evaluated written responses from AI, regular people and expert crisis responders.

The findings consistently showed that AI-generated responses were perceived as more compassionate and more attentive to detail, creating a sense of care, validation and understanding.

Empathy is crucial in various settings, particularly in mental health, where it helps individuals feel acknowledged and connected. However, expressing empathy consistently can be emotionally draining for human caregivers, leading to conditions like compassion fatigue.

Ovsyannikova, who has volunteered as a crisis line responder, highlights the emotional toll faced by professionals in the field.

Inzlicht, a professor of psychology and a co-author of the study, points out that although AI can be effective at delivering surface-level compassion, it lacks the capacity for the deeper, more meaningful care needed to resolve underlying mental health issues.

“AI can be a valuable tool to supplement human empathy, but it does come with its own dangers,” he said in the news release.

One major concern is the ethics of over-reliance on AI for emotional support: individuals might withdraw from human interaction, exacerbating loneliness and social isolation.

Additionally, while initial reactions to AI-generated empathy were positive, the researchers observed a phenomenon known as “AI aversion”: some participants’ preferences shifted once they learned that an empathetic response had come from AI.

This skepticism reflects a broader societal apprehension about AI’s capacity to understand human emotions.

However, younger generations, more accustomed to interacting with AI, may show greater trust in these technologies over time.

Inzlicht emphasizes the need for a balanced approach, where AI is used to enhance but not replace human empathy.

“AI can fill gaps, but it should never replace the human touch entirely,” he added.

The study invites further discussion, but one conclusion is clear: AI holds promise for addressing empathy shortages in mental health care, yet it must be integrated thoughtfully and ethically to truly benefit those in need.