New Study Reveals Hidden Brain Functions That Decode Speech Melody

A new study by Northwestern University, University of Pittsburgh and University of Wisconsin-Madison reveals that Heschl’s gyrus processes the melody of speech, challenging long-held assumptions about how the brain interprets pitch changes in conversation.

You’ve always heard that it’s not just what you say but how you say it, and now there’s scientific evidence to back it up. In a new multidisciplinary study, researchers from Northwestern University’s School of Communication, the University of Pittsburgh and the University of Wisconsin-Madison have discovered that a brain region known primarily for early auditory processing plays a far more significant role in interpreting the nuances of speech than previously understood.

The study, published in Nature Communications, shows that Heschl’s gyrus transforms subtle pitch variations, known as prosody, into meaningful linguistic information. These findings redefine long-standing beliefs about how, where and how quickly prosody is processed in the brain.

“The results redefine our understanding of the architecture of speech perception,” Bharath Chandrasekaran, the study’s co-principal investigator and professor and chair of the Roxelyn and Richard Pepper Department of Communication Sciences and Disorders at Northwestern, said in a news release. “We’ve spent a few decades researching the nuances of how speech is abstracted in the brain, but this is the first study to investigate how subtle variations in pitch that also communicate meaning are processed in the brain.”

The Participants

Chandrasekaran and Taylor Abel, chief of pediatric neurosurgery at the University of Pittsburgh School of Medicine, collaborated on the study by examining 11 adolescent patients undergoing neurosurgery for severe epilepsy.

Each patient had electrodes implanted deep within cortical regions critical for language function, providing unprecedentedly precise readings of brain activity.

“Typically, communication and linguistics research rely on non-invasive recordings from the surface of the skin, which makes it accessible but not very precise,” Abel added. “A collaboration between neurosurgeon-scientists and neuroscientists, like ours, allowed us to collect high-quality recordings of brain activity that would not have been possible otherwise, and learn about the mechanisms of brain processing in a completely new way.”

The Melody of Speech

While patients listened to an audiobook recording of “Alice in Wonderland,” the researchers tracked brain activity in real time.

They discovered that Heschl’s gyrus processed subtle pitch changes, encoding them as meaningful linguistic units separately from the sounds that form words.

“Our study challenges the long-standing assumptions of how and where the brain picks up on the natural melody in speech — those subtle pitch changes that help convey meaning and intent,” added co-first author G. Nike Gnanateja, an assistant professor in UW-Madison’s Department of Communication Sciences and Disorders. “Even though these pitch patterns vary each time we speak, our brains create stable representations to understand them.”

The research also found that prosodic contours — the rise and fall of speech — are encoded much earlier in auditory processing than previously thought.

Implications for the Future

This discovery has profound implications for various fields, including speech rehabilitation, artificial intelligence and our overall understanding of human communication.

“Our findings could transform speech rehabilitation, AI-powered voice assistants and our understanding of what makes human communication unique,” Chandrasekaran added.

The study’s insights into early prosodic processing could pave the way for new interventions for speech and language disorders, such as autism, stroke-related dysprosody and language-based learning differences.

Additionally, understanding how prosody is processed could improve AI-driven voice recognition systems by enabling them to better handle pitch variations, making natural language processing more human-like.

The research, supported by multiple NIH grants, marks a major step forward in our understanding of the brain’s intrinsic complexity and its impact on human speech perception and communication.

Source: Northwestern Now