Neural coding of complex sounds
Real-world acoustic signals such as speech and music contain energy across a broad range of frequencies. Cochlear mechano-electrical signal transduction is characterized by level- and frequency-dependent non-linearities, which have profound implications for the neural coding of complex sounds even at the level of the auditory nerve. Furthermore, diverse groups of neurons distributed across several brainstem nuclei perform distinct signal-processing operations on complex signals to extract and efficiently encode meaningful features. This processing often involves across-frequency interactions between neural excitation and inhibition.
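As a minimal sketch of what "level-dependent non-linearity" means here, the input-output function of the basilar membrane is often approximated as a broken-stick curve: roughly linear growth at low sound levels and strongly compressive growth at high levels. The knee point and compressive slope below are illustrative round numbers, not values from the studies discussed on this page.

```python
import numpy as np

def basilar_membrane_io(level_db, knee_db=40.0, compressive_slope=0.2):
    """Broken-stick input-output function: linear growth (1 dB/dB) below
    the knee, compressive growth (here 0.2 dB/dB) above it.
    Parameter values are illustrative, not fitted to data."""
    level_db = np.asarray(level_db, dtype=float)
    return np.where(level_db <= knee_db,
                    level_db,
                    knee_db + compressive_slope * (level_db - knee_db))

levels = np.array([20.0, 40.0, 80.0])
response = basilar_membrane_io(levels)
print(response)  # a 40-dB increase in input above the knee grows the output by only 8 dB
```

The practical consequence for complex sounds is that the relative representation of a signal's frequency components changes with overall level, so coding cannot be understood from responses to quiet, single-component stimuli alone.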
Coding of complex sounds
We have previously studied the neural coding of synthetic speech sounds, and of complex sounds evoking a musical pitch percept, in the discharge patterns of neurons in the cochlear nucleus. A major focus of that work has been the effect of sub-optimal listening conditions (e.g., presence of background noise and reverberation) on neural coding.
Figure 1: Coding of the (ambiguous) pitch of complex sounds
This figure shows the neural representation of the pitch of a class of complex sounds known as "iterated rippled noise." The representation is based on the timing of neural activity. When the sound is manipulated to make the pitch percept ambiguous (i.e., two different pitch percepts are equally likely for the same sound), the neural representation is also ambiguous: two pitch matches are equally probable from the temporal activity pattern.
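For readers unfamiliar with the stimulus, iterated rippled noise is generated by repeatedly adding a delayed copy of a noise to itself; the delay sets the pitch. The sketch below generates such a stimulus and estimates its pitch from the dominant autocorrelation lag of the waveform, a simple stand-in for the all-order inter-spike-interval analysis applied to neural spike trains; function names and parameter values are our own, chosen for illustration.

```python
import numpy as np

def iterated_rippled_noise(fs, dur, delay_ms, gain=0.8, n_iter=8, seed=0):
    """Delay-and-add network iterated n_iter times over Gaussian noise.
    A 5-ms delay gives a pitch near 1 / 0.005 s = 200 Hz."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(fs * dur))
    d = int(round(fs * delay_ms / 1000.0))
    for _ in range(n_iter):
        x = x + gain * np.concatenate([np.zeros(d), x[:-d]])
    return x

def autocorrelation_pitch(x, fs, fmin=50.0, fmax=500.0):
    """Pitch estimate from the largest autocorrelation peak within a
    plausible pitch range."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag

fs = 16000
irn = iterated_rippled_noise(fs, dur=0.5, delay_ms=5.0)
pitch = autocorrelation_pitch(irn, fs)
print(pitch)  # close to 200 Hz for a 5-ms delay
```

Making the stimulus inharmonic (e.g., delaying and inverting, or shifting the delayed copy) produces two autocorrelation peaks of similar height, which is the waveform-level counterpart of the ambiguous neural representation described above.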
Figure 2: Coding of speech sounds in the presence of reverberation and frequency modulation
This figure shows the neural representation of synthetic vowel sounds, with two vowels present simultaneously. Under anechoic (no reverberation) listening conditions, neural processing is able to parse the acoustic mixture into the two separate vowels. When reverberation and intonation (frequency modulation) interact, that neural processing task becomes increasingly difficult, and the resulting separation of the mixture is degraded.
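The idea of "periodicity-tagged" segregation can be sketched in a toy form: when two voiced vowels have different fundamental frequencies, averaging the mixture over consecutive periods of one F0 reinforces that vowel's harmonics while the competitor's harmonics average towards zero. This is an illustrative caricature under idealized conditions (steady F0s, no reverberation), not the neural mechanism or the analysis used in the studies cited below; reverberation combined with F0 modulation smears the periodicity this scheme relies on.

```python
import numpy as np

fs = 16000
P1 = 160                       # 160 samples at 16 kHz -> F0 = 100 Hz
n_frames = 40                  # analyse 40 periods of the 100-Hz vowel (0.4 s)
t = np.arange(n_frames * P1) / fs
rng = np.random.default_rng(0)

def vowel(f0, harmonics):
    """Harmonic complex with random phases: a crude stand-in for a
    steady voiced vowel."""
    return sum(np.sin(2 * np.pi * f0 * k * t + rng.uniform(0, 2 * np.pi))
               for k in harmonics)

v1 = vowel(100.0, [1, 2, 3])   # "vowel 1", F0 = 100 Hz
v2 = vowel(125.0, [1, 2, 3])   # "vowel 2", F0 = 125 Hz
mixture = v1 + v2

# Periodicity-tagged estimate of vowel 1: average the mixture over
# consecutive 100-Hz periods. Components periodic at 100 Hz add
# coherently; vowel 2's harmonics average out.
period_avg = mixture.reshape(n_frames, P1).mean(axis=0)
v1_estimate = np.tile(period_avg, n_frames)

corr = np.corrcoef(v1_estimate, v1)[0, 1]
print(round(corr, 3))  # close to 1.0: the 100-Hz vowel is recovered
```

In a reverberant room with a gliding F0, the period of each vowel changes faster than the reverberant tail decays, so no single averaging period exists and the recovered correlation drops, which is the intuition behind the degraded separation shown in the figure.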
(Some of) our (broad) questions in coding of complex sounds
- What is the neural basis of musical pitch, and where does it emerge?
- How can acoustic context influence the neural coding of speech and music?
- How does neural coding of complex sounds relate to behavior?
Further reading
- Sayles M, Winter IM (2008). Reverberation challenges the temporal representation of the pitch of complex sounds. Neuron 58:789-801.
- Sayles M, Winter IM (2008). Ambiguous pitch and the temporal representation of inharmonic iterated rippled noise in the ventral cochlear nucleus. J Neurosci. 28:11925-38.
- Sayles M, Stasiak A, Winter IM (2015). Reverberation impairs brainstem temporal representations of voiced vowel sounds: challenging "periodicity-tagged" segregation of competing speech in rooms. Front. Syst. Neurosci. 8:248.
- Sayles M, Stasiak A, Winter IM (2016). Neural segregation of concurrent speech: effects of background noise and reverberation on auditory scene analysis in the ventral cochlear nucleus. Adv. Exp. Med. Biol. 894:389-97.