A new study from the Hebrew University of Jerusalem, published Friday in the journal Nature Human Behaviour, presents a unified computational framework for exploring the neural basis of human conversation.
The peer-reviewed study was led by Dr. Ariel Goldstein of the Department of Cognitive and Brain Sciences and the Business School at the Hebrew University of Jerusalem, in collaboration with the Hasson Lab at the Neuroscience Institute at Princeton University and with Dr. Flinker and Dr. Devinsky of the NYU Langone Comprehensive Epilepsy Center.
The research bridged acoustic, speech, and word-level linguistic structures, offering unprecedented insights into how the brain processes everyday speech in real-world settings.
Looking at brain activity during communication
The study recorded brain activity during more than 100 hours of natural, open-ended conversations using a technique called electrocorticography (ECoG). To analyze the recordings, the team used the speech-to-text model Whisper, which breaks language down into three levels: simple sounds, speech patterns, and the meaning of words. These layers were then mapped onto brain activity using computational encoding models.
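The core idea of an encoding model like the one described above can be illustrated in a few lines. The sketch below is hypothetical and uses synthetic data: random vectors stand in for Whisper-derived word embeddings, and a simulated linear response stands in for ECoG electrode activity; the array shapes and the least-squares fit are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch of a linear encoding model: predict neural activity
# from speech-model embeddings. All data here is synthetic; shapes
# (500 words, 64 features, 10 electrodes) are hypothetical.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_words, n_features, n_electrodes = 500, 64, 10

# Stand-in for one level of Whisper embeddings (one vector per word).
X = rng.standard_normal((n_words, n_features))

# Simulated electrode activity: a linear function of the embeddings
# plus noise, so the model has a real signal to recover.
true_w = rng.standard_normal((n_features, n_electrodes))
Y = X @ true_w + 0.1 * rng.standard_normal((n_words, n_electrodes))

# Fit on the first 80% of words, evaluate on the held-out 20%.
split = int(0.8 * n_words)
w, *_ = lstsq(X[:split], Y[:split], rcond=None)
pred = X[split:] @ w

# Score each electrode by correlating predicted and actual activity.
r = [np.corrcoef(pred[:, e], Y[split:, e])[0, 1]
     for e in range(n_electrodes)]
print(f"mean held-out correlation: {np.mean(r):.3f}")
```

Evaluating on words the model never saw, as in the last step, is what makes the held-out prediction described in the study meaningful: a high correlation there indicates the embeddings genuinely capture something the electrode responds to.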
After analyzing the data, the researchers found that the framework could predict brain activity with high accuracy. Even when applied to conversations held out from the original training data, the model correctly matched different parts of the brain to specific language functions.
The study also found that the brain processes language in a sequence. Before a person speaks, activity moves from word-level representations toward forming sounds; after listening, the process runs in reverse, working back from sounds to meaning. The framework used in this study captured these complex dynamics more effectively than older methods.
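One common way to expose this kind of temporal sequence is a lag analysis: correlate a feature-derived signal with neural activity at a range of temporal offsets and see where the correlation peaks. The sketch below is a toy illustration with synthetic data; the delay of five samples and the noise level are arbitrary assumptions chosen so the recovered peak is visible.

```python
# Hypothetical lag analysis: find the temporal offset at which an
# embedding-derived signal best matches neural activity. Synthetic
# data; the 5-sample delay is an arbitrary simulated "neural lag".
import numpy as np

rng = np.random.default_rng(1)
n = 1000
feature = rng.standard_normal(n)      # word-level signal over time
lag_true = 5                          # simulated neural delay (samples)
neural = np.roll(feature, lag_true) + 0.2 * rng.standard_normal(n)

def corr_at_lag(x, y, lag):
    # Correlate x shifted by `lag` samples against y.
    return np.corrcoef(np.roll(x, lag), y)[0, 1]

lags = list(range(-10, 11))
scores = [corr_at_lag(feature, neural, lag) for lag in lags]
best = lags[int(np.argmax(scores))]
print(f"peak correlation at lag {best}")
```

A positive peak lag means the neural signal follows the feature (as in comprehension, sound before meaning), while a negative one would mean it leads (as in production, meaning before sound), which is the before/after asymmetry the sequence above describes.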
“Our findings help us understand how the brain processes conversations in real-life settings,” said Dr. Goldstein. “By connecting different layers of language, we’re uncovering the mechanics behind something we all do naturally—talking and understanding each other.”
This research has potential practical applications, from improving speech recognition technology to developing better tools for people with communication challenges. It also offers new insights into how the brain makes conversation feel so effortless, whether it’s chatting with a friend or engaging in a debate.