Seminars in Hearing Research (09/17/20) - Satya Parida

Author: M. Heinz
Event Date: September 17, 2020
Hosted By: Hari Bharadwaj
Time: 10:30 – 11:20 am
Location: Zoom
Contact Name: Bharadwaj, Hari M
Contact Email:
Open To: All
Priority: No
School or Program: Biomedical Engineering
College Calendar: Show
PhD student Satya Parida (Biomedical Engineering) will present "Neural representation of natural speech in noise following noise-induced hearing loss" at our Seminar in Hearing Research at Purdue (SHRP) this semester, on September 17th from 10:30 to 11:20 am on Zoom.

Seminars in Hearing Research at Purdue (SHRP)


Title:  Neural representation of natural speech in noise following noise-induced hearing loss

Speaker:  Satyabrata Parida, Ph.D. Candidate, BME (Heinz Lab)

Date: September 17, 2020

Time: 10:30 – 11:20 am


Please e-mail the host to join the SHRP seminar mailing list. Seminar announcements and Zoom links are sent to the mailing list on a weekly basis.



Hearing loss still hinders the real-world communication ability of many patients despite state-of-the-art interventions. Animal models of different hearing-loss etiologies can help improve the clinical outcomes of these interventions; however, several gaps exist. First, the translational impact of animal models is currently limited because anatomically and physiologically specific animal data are analyzed differently than the noninvasive evoked responses that can be recorded from humans. Second, we lack a comprehensive understanding of the neural representation of everyday sounds (e.g., spoken speech) in real-life settings (e.g., in background noise). This is especially true at the auditory-nerve level, which is the bottleneck of auditory information flow to the brain and the first neural site to exhibit crucial effects of hearing loss.


To address these gaps, we developed a unifying quantitative framework that allows direct comparison of invasive spike-train data and noninvasive far-field data in response to stationary and nonstationary sounds. We applied this framework to recordings from single auditory-nerve fibers and frequency-following responses from the scalp of anesthetized chinchillas with either normal hearing or mild-to-moderate hearing loss, in response to a natural speech sentence in noise. Key results include: (1) coding deficits for voiced speech manifest as tonotopic distortions without a significant change in driven rate or spike-time precision, (2) linear amplification aimed at countering audiometric threshold elevation is insufficient to restore neural activity for consonants, and (3) noise susceptibility generally increases following acoustic trauma. These findings explain the neural origin of common perceptual difficulties that hearing-impaired listeners experience, offer several insights for making hearing aids more individualized, and highlight the importance of better clinical diagnostics and noise-reduction algorithms.


The working schedule for the year:


The titles and abstracts of the talks will be updated here: