
Lessons Learned From the Outcomes of Children With Hearing Loss Study

by Meredith Spratford

In this online chat, two presenters discuss implications of the study findings for aural rehabilitation. The event was sponsored by SIG 9, Hearing and Hearing Disorders in Childhood.

Meredith Spratford: Our research has shown that when we use probe microphone verification, we get a closer match to prescriptive targets compared to other methods of verification. Specifically, probe microphone measures provided better fittings (better audibility, too) compared to functional gain (soundfield thresholds). I would also ensure that the child’s aided audibility falls within the confidence interval of expected audibility for their degree of hearing loss, per the UWO (University of Western Ontario) pediatric amplification protocol.

Elizabeth Walker: To follow up on what Meredith said, the problem with functional gain is that you’re not checking that the hearing aids are set appropriately for soft or conversational speech, or that loud sounds are comfortable. Functional gain just tests aided thresholds in the sound booth, so it is really only appropriate for checking cochlear implants, where probe microphone measures can’t be used. It’s not appropriate for verifying hearing aids, where you need to take into consideration the size of the child’s ear canals and whether you’re meeting targets.

Participant: Can you explain prescriptive targets? What should the prescriptive targets be for children?

Walker: Prescriptive targets are formulas or fitting procedures for prescribing hearing aid gain. There used to be many different prescriptive targets; now we pretty much have two—DSL (Desired Sensation Level) and NAL (National Acoustic Laboratories). DSL is used more often with children because its underlying goal is to normalize sounds across frequencies: soft, conversational and loud sounds should stay in those categories for the listener. The rationale is that fitting the hearing aids in this way makes speech audible to the listener, so they won’t have to “fill in the gaps” as much. Children don’t have the top-down skills to fill in gaps, so they need an audibility-based prescriptive method to ensure that speech at soft and conversational levels is accessible.

Spratford: Prescriptive targets for children are higher than for adults due to the emphasis placed on ensuring audibility of speech because children lack the top-down knowledge that adults have.

Participant: How do you help parents to understand the importance of hearing technology for kids with mild hearing loss?

Walker: It’s important to describe hearing loss in terms of how much audibility the child has without hearing aids. The Speech Intelligibility Index (SII) provides a metric to describe how much access a child has to the long-term average speech spectrum. Depending on the configuration of the hearing loss, a child with mild hearing loss could have an unaided SII of 50 to 60 percent, meaning they have access to only about half to three-fifths of the speech spectrum. And that’s in a quiet environment with the speaker close to the listener, which is not how kids typically learn.

Spratford: Haggard and Primus showed that when terms like “mild” are used, parents do not fully appreciate the difficulty children have. Describing the hearing loss with terms like “educationally significant,” or with the SII percentage of speech that is audible, may help parents understand the implications of missing information in and out of the classroom.

Walker: Another strategy I like for parents and teachers is to use simulations to demonstrate what mild hearing loss sounds like and the impact it can have on learning. My favorite simulations are the Flintstones video on YouTube and the Unfair Spelling Test. These simulations help teachers and parents understand how difficult listening can be for kids with mild hearing loss.

Spratford: Another of our findings (Tomblin et al., 2014) shows that speech and language development benefits from amplification just as much for children with mild hearing loss as for children with moderate hearing loss. So while children with mild hearing loss may be developing speech and language without amplification, they may not be performing at their full potential.

Participant: What is the best way to calculate the SII?

Walker: You can use the old “count the dots” method, but that is not the best way and it is very time consuming!

Spratford: Once you enter the child’s hearing thresholds into the Audioscan Verifit, the speech-mapping software provides an unaided SII value for you. You don’t have to count the dots—the program does it for you! To get the aided SII value, you will want to measure the hearing aid’s output in the child’s ear using a probe microphone. The speech-mapping software will calculate an aided SII, depending on the input level of the speech. Typically we measure SII at soft (50 dB SPL), conversational (65 dB SPL) and loud (75 dB SPL) levels.

Walker: It’s important to measure SII at all three of those levels with kids because we want to ensure optimal audibility of different levels of speech. No one encounters speech at just 65 dB (conversational level) in real life.

Spratford: You can also experiment to see what the SII values might be with different inputs (classroom teacher at different distances) using the Situational Hearing Aid Response Profile (SHARP, http://audres.org/rc/sharp/).
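For readers who want to see what goes into an SII value, the sketch below illustrates the basic idea: the SII is an importance-weighted sum of how much of the speech signal is audible in each frequency band. This is only a simplified illustration, not the ANSI S3.5 procedure that clinical software such as the Verifit implements, and the band-importance weights, speech levels and aided thresholds in it are invented example values.

```python
# Simplified, SII-style calculation for illustration only.
# This is NOT the ANSI S3.5 procedure that clinical software (e.g., the Verifit)
# implements; the band-importance weights and all levels below are invented.

# Octave-band center frequencies (Hz) and hypothetical band-importance weights
# (the real SII uses standardized importance functions; weights sum to 1.0).
BANDS_HZ = [250, 500, 1000, 2000, 4000, 8000]
IMPORTANCE = [0.10, 0.15, 0.25, 0.25, 0.15, 0.10]

def band_audibility(speech_level_db, threshold_db, dynamic_range_db=30.0):
    """Fraction (0-1) of the speech dynamic range above threshold in one band.

    Assumes speech peaks sit about 15 dB above the long-term average level and
    that the usable dynamic range is about 30 dB (the "count-the-dots" idea).
    """
    peaks = speech_level_db + 15.0
    audible_db = max(0.0, min(dynamic_range_db, peaks - threshold_db))
    return audible_db / dynamic_range_db

def sii(speech_levels_db, thresholds_db):
    """Importance-weighted audibility across bands (0 = nothing audible, 1 = everything)."""
    return sum(w * band_audibility(s, t)
               for w, s, t in zip(IMPORTANCE, speech_levels_db, thresholds_db))

# Hypothetical band levels for average speech at a 65 dB SPL conversational input,
# and hypothetical aided thresholds measured in the child's ear (dB SPL).
speech_65 = [62, 60, 58, 54, 50, 46]
aided_thresholds = [30, 30, 35, 40, 50, 60]

for f, w, s, t in zip(BANDS_HZ, IMPORTANCE, speech_65, aided_thresholds):
    print(f"{f} Hz: audibility {band_audibility(s, t):.2f} (weight {w})")

print(f"Illustrative aided SII at 65 dB SPL: {sii(speech_65, aided_thresholds):.2f}")
```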

Participant: What are your recommendations for the school-based SLP’s first steps following a cochlear implant (CI) in older children, aged 6–14?

Walker: I would first determine whether they are wearing the cochlear implant consistently. CI use may vary after initial stimulation even for school-age children. Your CI audiologist can do data logging to see how many hours per day the child is wearing the CI. You’ll also want to determine where the child is on the auditory hierarchy—detection, discrimination, identification, comprehension?

Spratford: Regarding auditory skills, you can use a checklist like the Functional Auditory Performance Indicators (FAPI) to examine which listening-related goals the child may need to work on.

Walker: If it’s a younger child with a CI, you can also try the Early Listening Function test (ELF). This will help you determine the child’s listening bubble.

Spratford: I would also recommend touching base with the child’s educational audiologist to see what type of hands-on skills you might need to know about the child’s equipment (parts of the CI, how to troubleshoot, how to connect to the remote mic system, etc.).

Participant: How can we modify the sound-filled school/learning environment to optimize listening conditions?

Spratford: I think we want to talk about noise, reverberation and distance as different areas we can address. Often, classroom noise comes from the HVAC system or through the classroom windows, which is not something we can readily or inexpensively modify. Remote microphone systems really are the best way to tackle noisy environments, but we also need to make sure that teachers are aware of when it’s appropriate to use them. Reverberation is easier to address—laying down carpet, adding window coverings, or installing sound-absorbent wall or ceiling panels are all relatively easy to do.
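To put rough numbers on the reverberation point, the sketch below uses the Sabine approximation (RT60 = 0.161 × V / A) to show how adding absorbent surfaces shortens reverberation time. The room dimensions and absorption coefficients are assumed example values, not measurements from any particular classroom.

```python
# Rough illustration of why absorbent surfaces shorten reverberation, using the
# Sabine approximation RT60 = 0.161 * V / A (metric units). The room dimensions
# and absorption coefficients below are assumed example values, not measurements.

def rt60(volume_m3, surfaces):
    """Sabine reverberation time; surfaces is a list of (area_m2, absorption_coefficient)."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# Hypothetical empty 10 m x 8 m x 3 m classroom (furnishings and people ignored,
# so the untreated value is worse than a real occupied room would be).
volume = 10 * 8 * 3                      # 240 m^3
floor_area = ceiling_area = 10 * 8       # 80 m^2 each
wall_area = 2 * (10 * 3) + 2 * (8 * 3)   # 108 m^2

hard_room = [(floor_area, 0.02), (ceiling_area, 0.05), (wall_area, 0.03)]  # tile, plaster
treated = [(floor_area, 0.30), (ceiling_area, 0.60), (wall_area, 0.03)]    # carpet, acoustic tile

print(f"Untreated RT60: {rt60(volume, hard_room):.2f} s")  # about 4.4 s here
print(f"Treated RT60:   {rt60(volume, treated):.2f} s")    # about 0.5 s here
```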

Walker: One thing we used to recommend was to put tennis balls on the legs of desks and chairs to reduce the squeaking sound. Tennis balls aren’t recommended anymore because they can collect a lot of bacteria. Instead, covers that fit on desk and table legs can be purchased for relatively little cost.

Spratford: Distance is another easy one to address with preferential seating. However, we need to do a good job of letting educators know what appropriate preferential seating looks like. We want the student to have good visibility not only of the teacher, but also of the other students in the classroom. Sitting front and center makes it difficult to see the other students in the room, so it’s better to have the student sit toward the front but off to the side of the room. Decreasing the distance between the teacher and the student who is hard of hearing will decrease the chance of noise interfering with instruction.
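The benefit of reducing distance can also be shown with a quick calculation: in a free field, speech level falls about 6 dB for every doubling of distance while steady background noise stays roughly constant, so the signal-to-noise ratio shrinks as the student sits farther away. The levels and distances below are assumed example values, and real classrooms are more reverberant than a free field, but the trend is the same.

```python
# Illustration of why shorter teacher-to-student distance helps: in a free field,
# speech level drops about 6 dB per doubling of distance, while steady background
# noise stays roughly constant, so the signal-to-noise ratio (SNR) shrinks.
# Levels and distances are assumed example values; real rooms are more reverberant.
import math

def speech_level_at(distance_m, level_at_1m_db=65.0):
    """Approximate free-field speech level at a given distance (inverse-square law)."""
    return level_at_1m_db - 20.0 * math.log10(distance_m)

NOISE_FLOOR_DB = 50.0  # assumed steady classroom noise level

for d in (1, 2, 4, 8):
    snr = speech_level_at(d) - NOISE_FLOOR_DB
    print(f"{d} m: speech ~{speech_level_at(d):.0f} dB SPL, SNR ~{snr:+.0f} dB")
```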

Participant: Do the OCHL investigators anticipate further areas of study to come out of these results?

Walker: We’re continuing to follow the kids from the OCHL study. We’ve just finished data collection out to fourth grade. We have papers that have been accepted looking at literacy outcomes. Print knowledge is a relative strength, but reading comprehension appears to be weak compared to same-age peers. We’re also looking at vocabulary development over time and we just had a paper accepted to the Journal of Speech, Language, and Hearing Research (JSLHR) that shows that breadth of vocabulary knowledge in kids who are hard of hearing appears to catch up to peers by fourth grade, but depth of vocabulary knowledge continues to be delayed.

Spratford: One of the major benefits of the OCHL study is that we collected information on intervention. However, the intervention data is difficult to sort through since providers and service amount/type change over time. We have plans to examine the effect of early (and later) intervention dosage (amount/duration) on outcomes, which would open doors for intervention research in the future.

Meredith Spratford, AuD, CCC-A, is a staff research audiologist in the Audibility, Perception & Cognition Laboratory at Boys Town National Research Hospital. meredith.spratford@boystown.org

Elizabeth Walker, PhD, CCC-SLP/A, is assistant professor in the Department of Communication Sciences and Disorders at the University of Iowa. elizabeth-walker@uiowa.edu
