PDC Test - Can We Try That in B Flat

This article has been approved for publication by NCRA's Council of the Academy of Professional Reporters. You can earn 0.25 PDC by passing the exam that follows it with a score of 85 percent or better.

The questions are based on the material in the article, but some may require additional research. Please take these tests online by following the directions on the main JCR Article test page.


Can We Try That in B Flat?

By Brad Ingrao, Au.D.

One of the most common complaints from people with hearing loss is that while their hearing technology helps them hear speech well, music “just isn’t the same.” This column will discuss why music is such a challenge for people with hearing loss and offer some real-world advice on proactive steps you can take to reduce frustration and make listening to music more enjoyable despite your hearing loss.

Describing and measuring sounds

All sounds can be described by using a few characteristics, or attributes, that can be directly measured. These include intensity, frequency, and temporal pattern.

Frequency is a measure of how many vibrations a sound creates per second. We perceive frequency as pitch. Low-frequency sounds like vowels, a tuba, or the piano keys on the far left have a great deal of energy and travel in all directions. High-frequency sounds like consonants, violins, or the keys on the far right of the piano keyboard have less energy and travel only in a straight line.

Intensity, which we perceive as loudness, describes how much energy a sound has and relates to how large the vibrations are. A whisper, the rustle of leaves, or a single instrument playing music marked “pianissimo” all have very low intensities. A person shouting, construction noise, and a marching band playing the finale of a Sousa march all have very high intensities.

The temporal pattern of a sound relates to the musical attribute rhythm. If you’ve ever heard an infant babbling, you can appreciate that speech has a very specific and predictable temporal pattern. If the babbler is in another room, you might think that he or she is actually talking because what’s coming out of his or her mouth has the pattern of speech even though there are no words or sentences yet.

In the realm of music, we also describe the character, or “voice,” of a sound using the term timbre. Timbre is how we differentiate two instruments — say a flute and a trumpet — playing the same note. The frequencies (and pitches) are the same, but the way that each instrument creates that sound gives each a different timbre.

How speech and music differ

Whenever I talk about music and hearing loss, I check with my friend, colleague, and fellow musician Dr. Marshall Chasin of the Musicians Hearing Clinics of Canada (www.musiciansclinics.com). He has spent many years researching and treating hearing loss in musicians. He reminds us that while music and speech both have the same measurable properties, they have some very specific differences. Specifically, speech and music differ because of the construction of the “instruments” and the maximum intensities they produce.

Reverberation, or the tendency of sound to bounce around a room, can be controlled by adding more sound-absorbing materials to the room, like carpets and drapes. The general term for this added sound absorption is “damping.” Sound systems with high damping create “smooth” sounds with very few “peaks” where the intensity exceeds the average intensity of the overall sample we measure, also known as the RMS (root mean square) level.

Because the human vocal tract is coated with mucous membranes, it is a highly damped system. This makes speech and vocal music very smooth with relatively low, blunt peaks of energy. According to Dr. Chasin’s measurements, these are in the 12 dB range above and below the average intensity. Speech sounds also don’t generally get louder than about 87 or 90 dB.

By contrast, most musical instruments are, by design, very reflective. This makes them sound “bright” and produces sounds that travel quite far. These sounds have peaks of close to 18 dB above the average. Depending on the genre, musical signals can be as intense as 110 dB. These stark differences make using hearing aids and cochlear implants designed for speech tricky when the input is music.
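For readers who like to see the numbers, the “peaks above the average” idea is what audio engineers call crest factor. Here is a minimal Python sketch (my own illustration, not from the article) that measures it for a pure tone; a smooth, highly damped signal like a sine wave has a crest factor of only about 3 dB, well below the 12 dB of speech or the 18 dB of instrumental music:

```python
import math

def crest_factor_db(samples):
    """Peak level above the RMS (average) level, in dB: 20*log10(peak / rms)."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

# One full cycle of a pure tone: peak/RMS = sqrt(2), so about 3 dB.
n = 1000
sine = [math.sin(2 * math.pi * i / n) for i in range(n)]
print(round(crest_factor_db(sine), 1))  # 3.0
```

A spikier signal, such as a drum hit, pushes the peak far above the RMS level and drives this number up accordingly.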

How hearing loss affects music perception

Before we address how hearing aids and cochlear implants do and do not play nice with music, let’s look briefly at why hearing loss messes up music so much. In the inner ear, we have two sets of sensory hair cells that, working together, allow us to hear and discriminate a wide range of pitches and intensities. The outer hair cells function like a pre-amplifier in a sound system, making very soft sounds loud enough for the microphones to convert to electrical signals. In the ear, the “microphones” are the inner hair cells. In addition to this pre-amplifier function, the outer hair cells work together with the brain to fine-tune pitch perception.

When outer hair cells are damaged or missing, not only do we lose the ability to hear very soft sounds, but the sounds we do hear are not accurate. The pitches of similar sounds blend together, and our ability to hear changes in loudness is distorted. Hearing aids can help with the soft sound audibility part and, to some extent, with the exaggerated loudness perception, but they really can’t help us regain our fine-tuned pitch perception.

When hearing loss reaches the moderate to severe and worse range, not only are the outer hair cells affected, but also the inner hair cells, or the microphones of the sound system. When inner hair cells are damaged, even though sounds are loud enough to hear, they are often distorted, much like listening to a person speak into a broken microphone.

How hearing technologies interact with music

The fact that speech has a predictable pitch, loudness, and temporal pattern helps hearing aids and cochlear implants make decisions about which sounds in the environment might be speech and which ones might be noise. This really does improve the ability to understand speech in less than ideal settings, but this design “bias” can cause some issues when trying to use these technologies to listen to music.

At the very front end of both hearing aids and cochlear implants are microphones. The microphones used by both technologies can easily handle input levels greater than 100 dB; however, the next component of the system is often not so forgiving. All cochlear implants and digital hearing aids need to convert the analog sounds picked up by the microphones into digital ones and zeros so they can be handled by the digital sound processor (DSP). This analog-to-digital (A/D) converter often includes a feature called a limiter, which reduces the intensity of the signal from the microphone. The purpose of this limiter is to conserve battery power. There is some logic to this if we recall that speech signals usually don’t exceed 87 dB. Why would a designer waste processing time and power on sounds outside the range of the signal everyone using the device wants to hear?

The answer, of course, is that many people want to hear things that are not speech. The problem is that when hearing aids and cochlear implants apply front-end limiting, many people perceive a small amount of distortion. Because of the digital nature of these hearing systems, this distortion is faithfully carried throughout the entire audio processing pathway and is ultimately delivered to the listener. The degree to which this will impact a given individual will vary, but I’d like to share a story from my clinical experience that illustrates how significant this can be.
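To make the limiting idea concrete, here is a small Python sketch (my own illustration, not from the article) that hard-limits a pure tone and measures how much distortion the clipping adds relative to the original signal:

```python
import math

def hard_limit(samples, ceiling):
    """Clip any sample whose magnitude exceeds the ceiling."""
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def rms(samples):
    """Root mean square (average) level of the signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

n = 1000
tone = [math.sin(2 * math.pi * i / n) for i in range(n)]  # one cycle of a pure tone
limited = hard_limit(tone, 0.7)  # ceiling set below the tone's peaks

# The part of the waveform the limiter shaved off is the distortion it adds.
residual = [a - b for a, b in zip(tone, limited)]
distortion_db = 20 * math.log10(rms(residual) / rms(tone))
print(round(distortion_db, 1))
```

For this tone and ceiling, the added distortion sits only about 13 dB below the signal itself, which is easily audible; that is why raising the A/D input ceiling matters so much for music listening.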

In 2001, I was an adjunct clinical supervisor at Northeastern University in Boston. One of my students referred a hard-of-hearing middle school saxophone player to me for a consult. She had a severe hearing loss and was wearing digital hearing aids that were only a few years old. Her issue was that while she could play her saxophone well and in tune alone, whenever she was in a group, such as during band practice, she began to play out of tune. It was recommended that her parents buy her new hearing aids with a music program in them, but she wasn’t convinced that was the issue.

I had her bring her sax to the appointment, and I brought mine. She was able to play well enough, but as reported, when she tried to match her pitch to mine, she was off. As an experiment, I had her remove her hearing aids and, using the corner of the room as a baffle to acoustically amplify sound, I asked her to try again. This time, without the distortion of the front-end limiter of her hearing aids’ A/D converter, she was able to match my pitch so well that I was able to teach her several songs “by ear.” My conclusion was that in the context of a full-sized concert band, the sound levels would be more than sufficient for her to hear and play. We arranged for a body-worn FM system with headphones that she would use only when the director was talking to the group.

Making it better

Since then, there have been some good efforts to make hearing technology more music friendly. Most mid- to high-end hearing aids have presets for a “music” program that sets the amplifier’s characteristics to be more compatible with the acoustic signature of music. A few manufacturers have listened to the work Dr. Chasin and others are doing and are allowing the A/D converter to pass more sound intensity to the DSP. By the way, this not only helps with music perception but also with hearing very loud speech, such as when we are in areas with a great deal of background noise. If your technology doesn’t support these options, you can overcome the input limiter issue by turning down the level of the music if it is recorded or by moving farther away if it is live. Both of these techniques decrease the amount of sound hitting the microphones, and they can keep the input limiter from kicking in.

Technology aside, it is possible to practice listening to music — much like new hearing aid and cochlear implant users who undergo aural rehabilitation. Like speech-based aural rehab, rebuilding your musical sound vocabulary will take time, and it needs to be built from the ground up. Rather than listening to your favorite symphony the first day you get your hearing aids, listen to something simpler, preferably something more speech-like. Folk music is a great choice, as are ballads with a small combo rather than a full orchestra.

When to find a different drummer

The hardest thing to come to terms with in the realm of hearing loss is arriving at a place of balance between hope for improvement and acceptance of realistic expectations. For some, advanced technology and hard work will allow them to enjoy most, if not all, of the music they enjoyed before their hearing loss. For others, however, it may be necessary to let go of the past and rediscover music from scratch. If your hearing loss prevents you from hearing fine gradations of pitch, try listening to music that has a simpler structure.

Explore all the instruments both live and in recordings, and find the ones that fit your hearing loss rather than trying to squeeze the square peg of your hearing loss into a round musical hole.

This is, of course, much easier said than done. It requires creativity, time, and resourcefulness. Most of all, it requires a willingness to accept the fact that while music “isn’t the same” since your hearing loss, that isn’t necessarily a bad thing. Lean on the support of friends, family, and your local and national HLAA contacts.

Above all, take a step back and ask yourself why music moves you. Is it the beat? The melody? The words? Once you find the core of your love of music, seek opportunities to experience that part of it with styles and instruments that you hear as well as possible. Then just listen, relax, and let the music move you. Dance to your own drum like no one is watching, and enjoy!

Brad Ingrao, Au.D., has been an audiologist for 20 years and has been surrounded by hearing loss all of his life. Dr. Ingrao is in private practice in Sarasota, Fla., and can be reached by e-mail at bingrao@e-audiology.net. You can follow him on Twitter @DocOtoblock. This article was developed under a grant from the Department of Education, NIDRR grant number H133E080006. However, the contents do not necessarily represent the policy of the Department of Education, and you should not assume endorsement by the federal government. This article is reprinted with permission from the March/April 2013 Hearing Loss Magazine. Visit the Hearing Loss Association of America at www.hearingloss.org.

This article was published in the September 2013 JCR


PDC TEST: Can we try that in B Flat?

1.   In music, ___________ normally refers to the variation and contrast in force or intensity.  

A) Dynamics
B) Crescendo
C) Diminuendo
D) Tempo

2.  The resonance by which the ear recognizes and identifies a voiced speech sound is:

A) Parlando
B) Timbre
C) Octavo
D) Semitone 

3.  An echo is an example of:

A) Reverberation
B) Ricochet
C) Recoil
D) Recherché 

4.  Spoken language generally doesn’t rise past the level of:

A) 110 dB
B) 12 dB
C) 87 dB
D) 100 dB 

5.  A measure of how many vibrations a sound creates per second is:

A) Regularity
B) Oscillation
C) Syncopation
D) Frequency 

6.  The cochlear sensory cells are also known by the term:

A) Photoreceptor cells
B) Amacrine cells
C) Inner hair cells
D) Basal cells 

7.  A device used to deflect or regulate flow or passage of sound is known as a/an:

A) Baffle
B) Audiometer
C) Cochlear implant
D) Amplifier 

8.  Words spoken sotto voce are an illustration of this musical term:

A) Fortissimo
B) Pianissimo
C) Portamento
D) Furioso 

9.  In music, forte means:

A) Loud
B) Soft
C) Fast
D) Slow 

10.  What does dB mean as used in the context of this article?

A) Database
B) Double Bass
C) Dynamic Bass
D) Decibel 

11.  In the ear, what functions as the “microphone”?

A) Eustachian tube
B) Inner hair cells
C) Hammer
D) Anvil

12.  In the context of this article, Au.D. means the author has a doctorate in audiology.

A) True
B) False

13.  The difference between speech and music as an input to hearing aids is sound level.

A) True
B) False

14.  Speech has an unpredictable tone, volume, and rhythm.

A) True
B) False

15.  RMS in the context of this article means:

A) Root mean square
B) Remote monitoring system
C) Rapid musical support
D) Ratio musical score


16.  The Hearing Loss Association of America estimates there are approximately ______________ Americans with hearing loss.

A) 42 million
B) 36 million
C) 32 million
D) 17 million 

17.  Many people with hearing loss experience distortion while listening to music due to the design limitations in various hearing aids and cochlear implants.

A) True
B) False 

18.  DSP in the context of this article means:

A) Disability sound processor
B) Digital sound processing
C) Deaf synchronized processor
D) Dedicated sound processing

19.  The outer hair cells perform similar to a _____________ in a sound system.

A) Pre-amplifier
B) Sub-Woofer
C) Sound mixer
D) Tweeter 

20.  To “damp” in the context of this article means:

A) To boost the volume of sound
B) To curb the vibration of sound
C) To enhance the pitch of sound
D) To improve the tempo of sound