Understanding How Acoustic Accessibility Impacts Learning

Translating speech into learning is a complex process for children. Because the auditory brain does not fully develop until age 15, children’s brains cannot fill in the gaps of words they don’t hear clearly the way adult brains can.

In a recent webinar presented by Lightspeed, Carol Flexer, Ph.D., CCC-A, LSLS Cert. AVT, described the complexity and importance of acoustic accessibility to learning, underscoring the added challenges that COVID-19-related measures such as masks and physical distancing pose for this academic year.

In the most basic terms, speech has to be heard to be understood. That may seem simple, but there are many ways for speech signals to be disrupted and rendered less intelligible, making it harder for children to understand them.

“When there’s a decrease in understanding, it uses up cognitive resources for comprehension and thinking capacity that ought to get spent for processing information rather than just receiving the information,” said Flexer, an audiologist and Distinguished Professor Emeritus at the University of Akron. “For children, the end result will be a high risk for a slower pace of learning.”

Connecting listening to learning

Children already work harder than adults to process spoken information. While most adults speak at about 200 words per minute, children comprehend only about 125 words per minute, meaning their brains have to fill in gaps to keep up with speech.

Both audibility (whether speech can be heard) and intelligibility (whether speech can be understood) are especially important for children, who are still developing the intrinsic knowledge needed to process information, Flexer said.

As children develop their auditory brains, they require greater sound clarity, a factor that is even more critical for children grappling with other disabilities or challenges, including those whose first language differs from the speaker’s, those with behavioral issues, and those who simply need to catch up on learning.

Linking sound to speech 

Speech intelligibility depends on the level of the speaker’s voice, the distance between the talker and the listener, the level of the listener’s hearing, and any intervening objects, such as masks or face shields, that can interfere with the speaker’s voice.

The frequency makeup of the English language also poses challenges, because words depend heavily on consonants, which carry 90 percent of speech’s intelligibility but only 10 percent of its power, Flexer said. When a person raises their voice, they emphasize vowel sounds, which does not improve intelligibility, she added.

Flexer said using a remote microphone can help ensure that students hear clearly no matter where they are in the classroom, because it lets the speaker talk at an average level while making consonant sounds easier for students to distinguish.

“They don’t have decades of life experience, so we need to make sure their brains receive high-integrity information,” she said.

Learn more about how classroom audio environments impact learning and what schools can do to adapt by watching Lightspeed’s webinar, “Understanding and Overcoming Listening Challenges When Schools Reopen.”