Listen to Episode: Talking About Hearing Loss … and Solving the Cocktail Party Problem?
(Music: ES_Master of Moves – Dez Moran)
Gray: This is ASHA Voices. I’m J.D. Gray.
(CLIP–unintelligible voices speaking over each other as example of hearing issues in crowded room)
You know that hearing problem you can have when you’re trying to pick up just one voice in a crowd…
Gray: Did you have problems tuning into just one of those voices? We’ll speak with Nima Mesgarani about his research on a possible solution to this common cocktail-party problem.
That’s later in the show, but first, we’ll hear from two representatives from the Ida Institute. They recently unveiled a new tool called My Hearing Explained. This single sheet of paper is meant to do what they say the audiogram can struggle with: translate hearing and hearing loss clearly enough to guide a conversation between a clinician and a client.
Rutherford: If we think about hearing loss, it’s not just something that affects someone who has the hearing loss, but it also affects the people around us. So you can really think about hearing loss as a communication loss, and it also impacts your family and those who are close to us.
Gray: Stay with us, as we talk about hearing loss and the conversations we have about hearing loss.
This is ASHA Voices, I’m J.D. Gray.
(Music: ES_Typewriter Song)
Gray: Support for this episode of ASHA Voices is brought to you by the Office of Multicultural Affairs at ASHA. Celebrating 50 years of increasing diversity and cultural competence.
Support for ASHA Voices comes from the 2019 ASHA Convention. Join 15,000 of your fellow ASHA Imaginologists for ASHA’s largest in-person professional development and networking event. Registration is already open. Find out more at Convention dot ASHA dot org.
Gray: When an audiologist and a client look at the audiogram, are they getting the same information? A survey conducted by the Ida Institute found that, on average, people with hearing loss rated their understanding of the audiogram at a 6 out of 10. That leaves plenty of room for improvement, right?
Located in Denmark, the Ida Institute is a non-profit organization focused on person-centered health care. Their new tool, My Hearing Explained, works as a bridge for audiologists and clients to connect with each other—and for clients to connect with their hearing loss.
Joining me now are two guests from the Ida Institute: Natalie Comas (rhymes with Thomas) and Cherilee Rutherford.
Natalie is an SLP and a Project and Training Specialist with the Ida Institute. Joining us from Denmark, welcome Natalie.
Comas: Thank you.
And Cherilee is a senior audiologist with the Ida Institute. On the line from South Africa, welcome Cherilee.
Rutherford: Thank you James.
Gray: Cherilee, I want to start with you, as an audiologist, by discussing that survey data that I mentioned a few seconds ago. Where is the audiogram falling short in these surveys?
Rutherford: Oh, that’s a really great question, James. If we look at the history of the audiogram, it was really developed to help us as audiologists or as clinicians document someone’s hearing levels. And over the years, it kind of morphed into performing other functions. So not only does it help us to document a hearing loss, but we also use it to communicate the results back to other colleagues, such as referring physicians or speech-language pathologists. And then somewhere down the line, we also started using it as a communication tool. And what we’ve uncovered is that there’s a real gap in terms of the audiogram as a communication tool, in helping people understand the results of the hearing test, what it means in their daily life, and what they can do about it.
Gray: When audiologists show people the audiograms of their hearing, are they seeing the same thing or for that matter, are they hearing the same things when they’re discussing the hearing loss?
Rutherford: So I think for the longest time we’ve believed that the audiogram is useful and just because it makes sense to us, we’ve assumed that it makes sense to our clients and their families. But this project has really challenged me to think about that differently and to perhaps consider for the first time that it might not be such an intuitive tool when it comes to communication.
Comas: If I can add to that as well, from our survey results, it’s actually quite interesting. It’s important always to know the why behind why we use these tools to communicate. And when we asked the audiologists in the survey, they gave a few statements, if I could share them with you as well. Some audiologists told us that it’s to ‘make them understand,’ ‘to understand the treatment required. They have a right to understand,’ ‘to identify as a true professional,’ and ‘it’s the way that they’ve been trained.’ And there were various other responses as well. But what was quite interesting, I find, is this ‘to make them understand.’ And as you said at the beginning too, James, from our survey results, when we asked the patients, do you understand your hearing loss from the results that you have been given with the audiogram? You know, a six out of 10 [laughs] was the average across many respondents who were patients. And obviously that does leave a lot of room for improvement, we think, in how we can best communicate with patients about their hearing loss.
Gray: Natalie, can you maybe shine the spotlight? Where is that confusion coming from?
Comas: I think, with the confusion, technically we have been trained to understand hearing loss by frequency, the pitch, of course, and decibels, the loudness, and to plot it out on this graph. There is some variance in how we interpret graphs and in our understanding of these. And so a big part of how we communicate now is seeing, okay, how can we help people and professionals understand and explain the hearing results more effectively? And is there a new and exciting way to do this? And I guess that’s where it’s led us to now, in terms of this innovation process: to see what we can create to help patients really understand their hearing loss and how it affects them in their daily life.
Gray: This brings us to a new tool. It’s called My Hearing Explained, and it fits on a single piece of paper. If you wouldn’t mind, tell us a little bit about what it would look like if you were holding this tool, and tell us how it works, Natalie.
Comas: So we’ve described it as a personalized infographic centered around an illustrated head, surrounded by icons for volume, clarity, and brain energy. Underneath, there’s a box to fill out with the patient to answer fundamental questions: what they can hear, what they struggle with, and what their most important communication situations are. And there’s also a summary section for your recommendations going forward: technology, assistive devices, communication strategies, and anything else that you feel is necessary to add.
Gray: Cherilee, can you speak to the significance of each of these different points in the circles? We have volume, clarity, and brain energy.
Rutherford: Yeah, sure. So when we started to think about what the most important information is that a client might be interested in when they go for a hearing test, this is what we’ve tried to capture. Someone wants to know: how much can I hear, how bad is it, and what can I do about it?
So really, volume, clarity, and brain energy allow us to zoom in on the most important factors when it comes to describing someone’s hearing loss. And then also, you know, to take things a bit further, when they go home, to be able to explain in layman’s terms to their friends and their family what the results were and what they mean to them.
Gray: Has that been a challenge as well, did you find, in the research?
Rutherford: Absolutely, yeah. So we’ve talked about the results before, where clients have said, you know, we rate our ability to understand the audiogram a six out of 10. But we also asked them, how do you rate your ability to explain to your friends and family what your hearing test results revealed? And they marked that as a five out of 10. So again, if we think about hearing loss, it’s not just something that affects someone who has the hearing loss, but it also affects the people around us. So you can really think about hearing loss as a communication loss, and it also impacts your family and those who are close to us. And so a very important part of person-centered care is involving friends and family in the whole hearing journey, if you like. And so that was a very important consideration for us as well.
Gray: Let’s talk about that a little bit. The Ida Institute takes a person-centered approach to care. Where else might we see that in this tool and in your research? How does this fit into that goal?
Comas: I think, with that person-centered approach, as professionals we can sometimes tend to use jargon or technical terms when we explain this to the patients.
Comas: And when we asked the patients, we had some feedback from them about the audiogram, and they were saying things like, “It’s enough for them, the professionals, but not for me.” Or, “They really didn’t explain it or describe it, and I didn’t know what I should ask.” Or, “I learned more from support groups than professionals, and I often feel like just another ear to them.” So those are the sorts of stories that we’re getting from the patients, which really highlights the fact that we’re not addressing their concerns or their needs for understanding their hearing loss.
Rutherford: And if I can just add to that, I think that when someone goes for a hearing test for the first time, there is a lot that happens in an appointment. It’s a busy appointment. There are many things that are said and lots of information that is exchanged between the professional and the client. And what My Hearing Explained also allows us to do is really summarize all the important things that were discussed. And like we said before, it allows the patient to take it home, review it, maybe think about things that they forgot to ask or that they might like to ask next time, and they can share it with their friends and family. And I think, importantly, it also helps the client to have confidence in the clinician, in the sense that they can feel really heard.
Comas: And I think, even though the audiogram’s a really valuable tool for hearing care professionals, it just hasn’t been ideal for explaining the test results to the patients. And I think the beauty of this tool is that it’s simple. It takes that difficult, complex information that we give to clients and patients and puts it into language and graphics that are really easy and intuitive to explain and understand.
Gray: Are you proposing this tool to work as a supplement to the audiogram or do you think that this is something that can completely replace the audiogram in these conversations?
Rutherford: When we started off with this project, we didn’t think that it would be replacing the audiogram, because the audiogram is a very important part of the clinical process. So we definitely see it as supplementing the audiogram. But taking a step back, it’s very important to actually ask the client, you know, at the end of the testing, as you’re going into the explanation phase: what would you like? Would you like me to go into great detail, or would you like me to give you just a basic overview? And then, based on the patient’s preference for information, you as a clinician can decide: I’m going to use the audiogram, or I’m going to use My Hearing Explained, or maybe I’ll use a combination. You can use your clinical judgment. So it wasn’t really designed to replace the audiogram as such.
Comas: And if I can add to that as well, I guess we could call it almost like a conversation guide.
Gray: Thank you both so much for your time. I appreciate it. That was Natalie Comas and Cherilee Rutherford of the Ida Institute. Their new tool is called My Hearing Explained.
Gray: You can find a link to My Hearing Explained and find more information about the Ida Institute on the Leader Live blog. ASHA and the Ida Institute are collaborators. You can read more about that collaboration at ASHA dot org.
We’re going to take a quick break. When we come back, we’ll speak with Nima Mesgarani about his research into how monitoring brainwaves could help us communicate in otherwise difficult situations. I’ll explain… This is ASHA Voices.
(Music: ES_Typewriter Song)
Gray: Support for this episode of ASHA Voices is brought to you by the Office of Multicultural Affairs at ASHA, celebrating its 50th anniversary. OMA is focused on helping ASHA members address cultural and linguistic diversity in the speech-language-hearing world. Find resources to increase your cultural competence by going to ASHA dot org and searching for multicultural.
Support for ASHA Voices comes from the 2019 ASHA Convention. Learn about the latest research, expand your clinical skills, discover new products, and earn continuing-education credit. The ASHA Convention will take place in Orlando from November 21st to the 23rd. Registration is open. Find out more at Convention dot ASHA dot org.
Gray: This is ASHA Voices, I’m J.D. Gray. Our next guest is a faculty member at Columbia University’s Zuckerman Institute. Nima Mesgarani is a neural engineer exploring communication and the brain’s role in how we communicate. I spoke with Nima in July. We began our conversation by discussing what is often called the cocktail party problem.
This is the problem that arises from a noisy environment where it’s hard to isolate the voice of a single speaker in a room, like at a cocktail party.
It’s a task that hearing aids struggle with.
Nima addressed this issue in a recent research article that he co-authored. It appeared in Science Advances, and I asked him if he might have the cocktail-party solution.
Mesgarani: Yes. In a controlled environment, and with the tests that we did, you could say that. Obviously, you know, for this to become a real application and to work in real-world conditions, we still have some work to do.
Gray: Okay. So I have a clip here, and in it, you can hear your work in action. Before I play it, I want to let our audience know that it might be a little hard to understand at first. It’s intentional. The thing to listen to here is which voice is most clear. As the clip goes on, you should be able to hear the change. At the beginning of the clip, the male voice will be the clearest.
(CLIP–unintelligible voices speaking over each other as example of hearing issues in crowded room)
Gray: So we can hear the shift there. But Nima, if you can kinda break it down for us, what are we hearing?
Mesgarani: Yeah. So if I can share a little bit of the background of this work: a few years ago, we had the scientific discovery, when we were looking at a listener’s brainwaves, that if they’re listening to people talking at the same time, their brainwaves only track the voice of the speaker that they are attending to. It’s as if the brain filters out the other interfering sources.
Gray: Does it sort of mimic the brainwave, or how do you know it’s tracking that specific voice?
Mesgarani: Yeah, so if you just compare the brainwaves, you know, as they go up and down and as they represent information about the sound sources, with the voices of the simultaneous speakers, what you will see is that the brainwaves only match the voice of the speaker that the person is trying to pay attention to. And for the other voices in the environment, there’s no correlation between their voices and the brainwave of the listener.
Mesgarani: So this was a scientific discovery a few years ago, in 2012. And this was basically the foundation of this current research. The idea here is, as you mentioned, that hearing aids struggle in cocktail parties because they amplify every sound in the environment. A listener with hearing loss will have difficulty if everybody is amplified. But, you know, if there is no difference between the sounds that are coming from the target and the interfering sources, there is nothing a hearing aid can do, unless you have this extra information that this particular speaker is the target and those other speakers are noise. So the idea here is that you can have a hearing aid that is also monitoring the brainwave of the user, and by comparing the brainwaves to the sound sources in the environment, it can automatically detect which speaker is the target and which speakers are interference. And, consequently, it can amplify the target speaker and suppress the other sounds.
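The correlation test Nima describes can be sketched in a few lines. This is a toy illustration with synthetic signals, not the study’s method (the actual work used invasive neural recordings and learned reconstruction models): the decoded “attended” speaker is simply the one whose speech envelope correlates best with the neural trace.

```python
# Toy sketch of brainwave-envelope attention decoding, with synthetic data.
import numpy as np

rng = np.random.default_rng(0)

def envelope_corr(neural, speech_env):
    """Pearson correlation between a neural trace and a speech envelope."""
    n = (neural - neural.mean()) / neural.std()
    s = (speech_env - speech_env.mean()) / speech_env.std()
    return float(np.mean(n * s))

# Two synthetic speech envelopes (slowly varying, non-negative).
t = np.linspace(0, 10, 1000)
speaker_a = np.abs(np.sin(2 * np.pi * 0.7 * t)) + 0.1
speaker_b = np.abs(np.sin(2 * np.pi * 1.3 * t + 1.0)) + 0.1

# Simulated neural trace: it tracks speaker A's envelope, plus noise,
# mimicking the finding that the brainwave follows the attended voice.
neural = speaker_a + 0.5 * rng.standard_normal(t.size)

corrs = {name: envelope_corr(neural, env)
         for name, env in [("A", speaker_a), ("B", speaker_b)]}
attended = max(corrs, key=corrs.get)  # speaker with the best match wins
print(attended)  # prints "A": only the attended envelope correlates
```

With real recordings, the neural trace would first be mapped into the audio domain by a trained stimulus-reconstruction model; the final comparison step, though, is essentially this correlation pick.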
Gray: Wow, and we can think of how this would be applied then in the real world, such as a cocktail party or any other event where there’s a large crowd.
Mesgarani: Yes, exactly.
Gray: From that moment then, how do we get to the clip that I just played, where you could hear the distinguishing between the voices. And where did that clip come from?
Mesgarani: Yeah, so in this case, we were monitoring the brainwave of the subject as the subject was listening to two simultaneous speakers, a male and a female voice. In the first part of that clip, the subject was trying to pay attention to the male speaker, and halfway through, the subject switched attention to the female speaker. So our system takes as input the brainwave of the listener and the audio channel. The first thing it does is automatically separate all the voices in the environment. So, you know, in a way, we have to first solve the cocktail party problem in an algorithm. We have an algorithm that takes the mixed audio of everybody talking and separates it into different voices, and then it compares those separated voices to the brainwaves. And the one that matches the brainwaves best is then amplified. And this helps the listener to pay attention more easily to the voice that they are interested in.
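The final step Nima describes, reweighting the separated voices once one has been matched to the brainwave, can be sketched as below. The separation and neural decoding are assumed already done, and the signals and gain values are illustrative stand-ins, not values from the paper.

```python
# Toy sketch of the amplify-and-suppress remix step.
import numpy as np

def remix(sources, attended_idx, target_gain=2.0, noise_gain=0.25):
    """Boost the attended source, attenuate the others, and sum them."""
    out = np.zeros_like(sources[0])
    for i, src in enumerate(sources):
        gain = target_gain if i == attended_idx else noise_gain
        out += gain * src
    return out

# Two separated "voices" (sine waves standing in for speech waveforms).
t = np.linspace(0, 1, 8000)
male = np.sin(2 * np.pi * 120 * t)    # lower-pitched voice
female = np.sin(2 * np.pi * 210 * t)  # higher-pitched voice

# Attention switches halfway through, as in the clip: first the male
# voice is the decoded target, then the female voice.
first_half = remix([male[:4000], female[:4000]], attended_idx=0)
second_half = remix([male[4000:], female[4000:]], attended_idx=1)
```

In a real device, `attended_idx` would be updated continuously from the brainwave comparison, and the gains would be chosen to keep the background audible rather than fully suppressed.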
Gray: My understanding is it’s not in a hearing aid yet, right? There were electrodes that were used for this, correct? Placed on the brain. Tell me what the technology looks like now.
Mesgarani: Sure. Yes, in this case, because we wanted to have as clean a neural measurement as possible, we worked with neurosurgeons at Northwell Health Hospital in Long Island. And whenever they have an epilepsy patient who, for medical reasons, has to have electrodes implanted in their brain, we get this opportunity to go to these patients and ask them if they’d like to participate in our research. And if they agree, then we just play them some sounds and record their brainwaves invasively. So obviously, you know, these electrodes are very close to the brain, and the quality of the neural measurement is really good. And with that, you can detect the attention of the person, or, in other words, you can detect who the person wants to pay attention to, in a matter of seconds. So, you know, what we showed in this work, to show the upper bound of this sort of technology, we used the invasive recording. But even nowadays, people are using much less intrusive and invasive neural measurements to show that this is still possible to do.
Gray: Wow. So do you believe this technology’s gonna become wearable?
Mesgarani: I think so. I think it’s definitely going in that direction. There is a lot of effort from many research groups to make it as nonintrusive as possible, to improve the decoding algorithms, to try to do this as quickly and as accurately as possible, and to improve those automatic speech separation algorithms to make them work in crowded places. And with the combined effort that is going into this field, I think it’s very likely that this technology will be available in perhaps 5 to 10 years.
Gray: What was it that brought this specific issue into your focus as a neuroscientist?
Mesgarani: Yeah, I was always interested in hearing and speech. You know, from the early days when I was a graduate student, I originally used to work on speech technologies, trying to build algorithms like Apple’s Siri or Google Voice. And at that time, these algorithms were not able to do a very good job. And that’s how I got interested in the brain. I wanted to know how the brain solves this problem, so maybe we could get some ideas and make better algorithms that can do that. And so my research has placed me between these two fields: speech technology and speech neurophysiology. We had that scientific discovery, and then the engineering part of our team got interested: is it possible to make this into an application? And the effort of that research is what we published recently in Science Advances, as we mentioned.
Gray: Is there anyone in your life who relies on hearing aids?
Mesgarani: Yeah, actually my grandmother used to wear a hearing aid. She passed away a few years ago. But I always remember how isolated she was, and how she had a hearing aid, but it wasn’t really helpful. And she didn’t really want to use it, because it didn’t really help when there were multiple people around, in a crowded environment. So I think that also had an effect, ’cause I always wanted to have a better solution for this problem.
Gray: What do you think she would say if she could see what you’re working on today?
Mesgarani: Oh I just can imagine her smiling. (Laughs) And perhaps saying some encouraging remarks and… you know, and hopefully she’ll be proud of the work that we’re doing.
Gray: Dr. Nima Mesgarani is a faculty member at Columbia’s Zuckerman Institute. His research on monitoring brainwaves and isolating voices in a crowd appeared in Science Advances in May of this year.
(Music: ES_Forest Pond With Stars – Polar Nights)
ASHA Voices is produced by the American Speech-Language-Hearing Association and comes from the team behind the ASHA Leader magazine.
Support for this episode of ASHA Voices is brought to you by the Office of Multicultural Affairs at ASHA. Go to ASHA dot org and search for multicultural, to find ways to increase your cultural competence.
Additional support for ASHA Voices comes from the 2019 ASHA Convention. Registration is open now. Find out more at Convention dot ASHA dot org.
Production assistance comes from Pamela Lorence. I’m J.D. Gray, and this is ASHA Voices.
(Music: ES_Hyperthymesia – Frank Jonsson)
Gray: You’ve probably heard the term person-centered care before, but how do you use it to help clients reach their goals?
Next time on ASHA Voices … we talk with two speech-language pathologists who are dedicated to providing and sharing their support for person-centered care.
Strategies for working with people with dementia, tips for incorporating technology into treatment, and stories of successful outcomes … next time, on ASHA Voices.