Scientists at Columbia University have found that the brain processes speech differently depending on how clearly it can be heard and how closely the listener is attending to it. This insight, which distinguishes the neural treatment of “glimpsed” speech from “masked” speech, may improve the precision of brain-controlled hearing aids.
The research team, led by Dr. Nima Mesgarani at Columbia University in the United States, has shown that the neural processing of speech changes with both its audibility and the listener’s attention to it. The study, recently published in the open-access journal PLOS Biology, combines neural recordings with computational modeling. It shows that when competing sounds drown out the speech we are attending to, the brain encodes phonetic information differently than when the speech is clearly audible. The findings could inform hearing aids designed to isolate and amplify only the speech a user is focusing on.
Picking out a single voice in a setting full of louder speech is difficult, and indiscriminately amplifying every sound does little to make a hard-to-hear voice stand out. Hearing aids that attempt to amplify only the attended speech exist, but they are not yet accurate enough for practical use.
To understand how the brain processes speech under these challenging conditions, the scientists recorded brain activity via electrodes implanted in patients with epilepsy who were undergoing brain surgery. The patients were asked to attend to a single voice that was either louder (“glimpsed”) or quieter (“masked”) than other voices playing at the same time.
The team used the neural recordings to build predictive models of brain activity. These models showed that phonetic features of “glimpsed” speech are encoded in both the primary and secondary auditory cortex, with stronger encoding in the secondary cortex. “Masked” speech, by contrast, was encoded only when it belonged to the voice the listener was attending to, and its encoding occurred later in time than that of “glimpsed” speech. Because “glimpsed” and “masked” speech are encoded separately, systems that focus on decoding only the “masked” portion of attended speech could improve brain-controlled hearing aids.
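To give a flavor of what a predictive encoding model of this kind looks like, here is a minimal, fully simulated sketch. It is not the authors’ pipeline: the “phonetic feature” series, the temporal response function, and the noise level are all invented for illustration. The idea it demonstrates is the standard one of fitting a time-lagged linear model that predicts neural activity from stimulus features, then scoring encoding strength as the correlation between predicted and observed responses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "phonetic feature" time series standing in for the speech stimulus.
n, lags = 2000, 10
stim = rng.standard_normal(n)

# Simulated neural response: a lagged linear filter of the stimulus plus noise.
true_trf = np.exp(-np.arange(lags) / 3.0)        # hypothetical temporal response function
X = np.column_stack([np.roll(stim, k) for k in range(lags)])
X[:lags] = 0.0                                    # drop wrap-around samples from np.roll
neural = X @ true_trf + 0.5 * rng.standard_normal(n)

# Fit a ridge-regularized encoding model via the normal equations.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ neural)

# Encoding strength = correlation between predicted and observed activity.
pred = X @ w
r = np.corrcoef(pred, neural)[0, 1]
print(f"prediction correlation: {r:.2f}")
```

In a real analysis one would cross-validate, use actual phonetic features and recorded neural data, and compare the fitted correlations across conditions (glimpsed vs. masked, attended vs. unattended) and across cortical sites.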
Vinay Raghavan, the study’s first author, states, “In noisy environments, the brain is capable of reconstructing missed elements of speech when ambient noise levels are high. Additionally, the brain can pick up fragments of speech that are not the main focus of attention, but only when the background noise is comparatively subdued.”
Reference: “Distinct neural encoding of glimpsed and masked speech in multitalker situations” by Vinay S Raghavan, James O’Sullivan, Stephan Bickel, Ashesh D. Mehta, and Nima Mesgarani, 6 June 2023, PLOS Biology.
DOI: 10.1371/journal.pbio.3002128
Financial support for this research was provided by the National Institutes of Health (NIH), specifically the National Institute on Deafness and Other Communication Disorders (NIDCD) (Grant DC014279 to NM). The funding entities did not influence the study’s design, data collection and analysis, the decision to publish, or the manuscript’s preparation.
Frequently Asked Questions (FAQs) about brain-controlled hearing aids
What is the main focus of the research conducted by scientists at Columbia University?
The primary focus of the research is to understand how the brain processes speech in noisy environments. The aim is to apply this understanding to improve the accuracy of brain-controlled hearing aids.
Who led the research team?
The research team was led by Dr. Nima Mesgarani from Columbia University in the United States.
Where was the study published?
The study was published in the open-access journal PLOS Biology.
What methods were used in the research?
The research combined neural recordings with computational modeling. The neural data were collected from epilepsy patients undergoing brain surgery, who were asked to focus on a particular voice in a noisy setting.
How could this research impact hearing aids?
The findings suggest that understanding the distinct neural encoding of “glimpsed” and “masked” speech could lead to improvements in the accuracy of hearing aids that are controlled by brain signals.
What are “glimpsed” and “masked” speech?
“Glimpsed” speech refers to a voice that is louder than other surrounding voices, while “masked” speech is quieter or drowned out by other noises. The study found that the brain encodes these types of speech differently.
What is the potential application of focusing on “masked” speech?
Focusing on deciphering only the “masked” portion of attended speech could lead to improved auditory attention-decoding systems for brain-controlled hearing aids.
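As a rough illustration of the attention-decoding loop such a hearing aid would need (entirely simulated, not taken from the study), the sketch below "reconstructs" an attended speech envelope from a noisy neural signal and then selects whichever candidate talker correlates best with it. The talker names, envelopes, and trivial identity decoder are all hypothetical stand-ins for what would, in practice, be a trained decoding model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3000

# Two competing talkers' speech envelopes (toy stand-ins).
env_a = np.abs(rng.standard_normal(n))
env_b = np.abs(rng.standard_normal(n))

# Simulated neural signal that tracks the attended talker (A) plus noise.
neural = env_a + 0.8 * rng.standard_normal(n)

# "Reconstruct" the attended envelope with a trivial identity decoder,
# then pick the candidate talker whose envelope best matches it.
recon = neural
scores = {name: np.corrcoef(recon, env)[0, 1]
          for name, env in [("talker_a", env_a), ("talker_b", env_b)]}
attended = max(scores, key=scores.get)
print(attended)
```

A deployable system would replace the identity decoder with a model trained on the user’s neural responses, and the study’s finding suggests such a decoder might work better if it weights the “masked” portions of the attended speech.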
Who funded the research?
The research was funded by the National Institutes of Health (NIH), specifically the National Institute on Deafness and Other Communication Disorders (NIDCD).
Did the funders influence the study?
No, the funders had no role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Who is the lead author of the study?
The lead author of the study is Vinay Raghavan.
More about brain-controlled hearing aids
- PLOS Biology Journal Article
- Columbia University Research
- National Institutes of Health (NIH)
- National Institute on Deafness and Other Communication Disorders (NIDCD)
- Neural Processing in Auditory Perception
- Advancements in Hearing Aids
- Dr. Nima Mesgarani’s Research Profile
- Vinay Raghavan’s Academic Profile
7 comments
Can someone explain glimpsed and masked speech a bit more? got a bit lost there.
Wow, this is mind-blowing stuff! Who knew the brain was this complex. Great research by Columbia uni.
This is so cool! Can’t wait to see how this impacts the hearing aid industry. It’s gonna be a game changer.
I’m intrigued, really! So, are we saying that the tech for hearing aids is gonna take a leap soon? Hope so.
Never thought about how hard it is to focus on a single voice in a noisy room until now. Science is amazing, isn’t it?
Incredible! But I wonder how long it will take for these findings to actually make it into a product? Research is one thing, implementation’s another.
I’ve got a family member who uses hearing aids, and trust me, they could use an upgrade. Good on these researchers for looking into it.