Seminars in Hearing Research at Purdue
Abstracts
Talks in 2020-2021
[Zoom Meetings: Thursdays, 10:30-11:20 AM]
August 27, 2020
Joshua M. Alexander, Ph.D., Associate Professor, SLHS
Validity of the peak height insertion gain (PHIG) for quantifying acoustic feedback in hearing aids
***POSTPONED until Oct 1st.
September 3, 2020
Hari Bharadwaj, Assistant Professor, SLHS/BME
Initial Forays into Web-based Psychoacoustics
Contributors: Brittany A. Mok (SLHS), Vibha Viswanathan (BME), Agudemu Borjigin (BME), Ravinderjit Singh (BME)
Web-based experiments offer the potential to collect large datasets from diverse cohorts of listeners, and can help circumvent the constraints on in-person testing imposed by the COVID-19 pandemic. Here, we (1) describe our infrastructure for multipart browser-based hearing studies with anonymous paid participants, (2) outline our approach to screening participants for headphone use and “normal-hearing” status, and (3) compare (very) preliminary performance trends in the same task paradigms between lab- and web-based testing.
September 10, 2020
Guochenhao Song, Graduate Student, Herrick Acoustics Lab (w/ Profs. Patricia Davies and Yangfan Liu)
Refinements for Tone-to-Noise Ratio in an Annoyance Model for Tonal Office Noises
September 17, 2020
Satyabrata Parida, Ph.D. Candidate, BME (Heinz Lab)
Neural representation of natural speech in noise following noise-induced hearing loss
Hearing loss still hinders the real-world communication ability of many patients despite state-of-the-art interventions. Animal models of different hearing-loss etiologies can help improve the clinical outcomes of these interventions; however, several gaps exist. First, the translational impact of animal models is currently limited because anatomically and physiologically specific animal data are analyzed differently than the noninvasive evoked responses that can be recorded from humans. Second, we lack a comprehensive understanding of the neural representation of everyday sounds (e.g., spoken speech) in real-life settings (e.g., in background noise). This is especially true at the auditory-nerve level, which is the bottleneck of auditory information flow to the brain and the first neural site to exhibit crucial effects of hearing loss.
September 24, 2020
Brandon Coventry, Ph.D. Candidate, BME (Bartlett Lab)
Towards closed-loop optical control of auditory thalamocortical circuits using deep reinforcement learning
Closed-loop neuromodulation, also called intelligent neural control, has become the holy grail of both neuroprostheses and brain-computer interfaces for its promise to sense specific physiologic states and take targeted therapeutic actions when certain conditions are met. However, current closed-loop systems rely primarily on simple threshold measures, allowing only limited control of neural circuits. A method that can learn relevant physiologic states would allow finer-tuned control of the target neural circuit, with direct therapeutic relevance for hearing restoration and cochlear implants as well as Parkinson’s disease and other neurological diseases and disorders. One such method exists in the class of machine learning algorithms known as reinforcement learning (RL). In RL tasks, the system under study is treated as a game environment in which certain actions can lead to short- or long-term rewards. The goal of the algorithm is to learn an approximate model of the system under study and the actions that maximize both short-term and long-term rewards.
In this talk, we will discuss my current work with infrared neural stimulation (INS), a label-free, targeted optical neuromodulation technique, and introduce our new toolbox, SpikerNet, which implements reinforcement learning-based closed-loop control. We will also discuss how reinforcement learning can be used in physiological studies more generally. Rats in our study were implanted with 16-channel recording arrays in auditory cortex and a fiber-optic stimulating “optrode” in the ventral division of the medial geniculate body. Our results suggest that INS is highly spatially localized, activating local microcircuits, and can reliably stimulate auditory thalamocortical circuits. We will also show our initial work with SpikerNet, which will lead to a targeted, closed-loop neuromodulation tool.
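The RL framing described above (an environment, actions, and short- and long-term rewards) can be illustrated with a minimal tabular Q-learning sketch on a toy problem. This is a generic illustration of the technique, not SpikerNet itself; the environment and all parameter values are hypothetical.

```python
import random

def q_learning(n_states=5, episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain: actions move left (0) or right (1);
    a reward of 1 is delivered only at the rightmost (terminal) state."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy action selection: explore occasionally, else exploit.
            if rng.random() < epsilon:
                a = rng.choice([0, 1])
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Temporal-difference update toward the reward plus discounted future value.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the learned values prefer the rewarded direction from every state, which is the same logic an RL controller would use to prefer stimulation actions that drive a neural circuit toward a target state.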
October 1, 2020
Joshua M. Alexander, Ph.D., Associate Professor, SLHS
Validity of the peak height insertion gain (PHIG) for quantifying acoustic feedback in hearing aids
Joshua M. Alexander1, Stephanie Trippel1, Randall Wagner2, and Steve Armstrong3
1 Purdue University, West Lafayette, Indiana
2 National Institute of Standards and Technology, Gaithersburg, Maryland
3 SoundsGood Labs, Ontario, Canada
Hearing aids are commonly fit with the ear canal partially or fully open, a condition that increases the risk of acoustic feedback. By restricting usable gain, feedback narrows the audiometric fitting range of a device. To guide clinical decision-making and device selection, we devised the Peak Height Insertion Gain (PHIG) method for detecting feedback spikes and spectral ripple in insertion gain spectra derived from audio recordings. Using a manikin, 145 audio recordings of a speech sample were obtained from seven hearing aids. Each hearing aid was programmed for a moderate high-frequency hearing loss with systematic variations in frequency response, gain, and feedback suppression, yielding recordings that varied in the presence and strength of feedback. Using subjective ratings from 13 expert judges, the presence of feedback was determined and then classified according to its temporal and tonal qualities. These classifications were used to optimize parameters of two versions of the PHIG, based on a global and a local analysis of time-frequency bins. By combining the results from these two PHIG methods and setting specificity to 0.95, sensitivity was ≥ 0.94 for all categories of feedback except “infrequent.” Without compromising performance, a clinically expedient version of the PHIG can be obtained from only a single measurement.
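The operating point described above, fixing specificity at 0.95 and reading off sensitivity, follows the standard ROC procedure: set the detection threshold from the scores of feedback-free recordings, then count detections among feedback-containing ones. The sketch below illustrates that generic procedure, not the PHIG computation itself; the function name and data are hypothetical.

```python
def sensitivity_at_specificity(pos_scores, neg_scores, specificity=0.95):
    """Pick the detection threshold that yields the desired specificity on
    negative (feedback-free) scores, then report sensitivity on positive
    (feedback-containing) scores. Scores >= threshold are flagged as feedback."""
    thr = sorted(neg_scores)[int(specificity * len(neg_scores))]
    return sum(s >= thr for s in pos_scores) / len(pos_scores)
```

Combining two detectors, as the abstract does with the global and local PHIG analyses, amounts to flagging a recording when either score crosses its respective threshold.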
October 8, 2020
Subong Kim, Post-doctoral Associate, SLHS
Mechanism-based approach to potential benefits of using noise reduction in hearing aids
Hearing aid (HA) users often have difficulty understanding speech in noise, and therefore noise reduction (NR) algorithms are implemented in modern HAs to attenuate background noise. However, spectral-subtraction-based NR, for instance, necessarily introduces spectral distortion of speech cues that are essential for speech understanding, resulting in no clear benefit in speech intelligibility. Nevertheless, previous studies have reported that attenuated noise provides cognitive benefits and increases the ease and comfort of listening. HA users also differ in how they react to the trade-off between noise attenuation and spectral distortion. However, we do not know the neural mechanisms underlying NR's potential benefits or what drives the individual differences in those benefits. Here, I suggest a novel way to investigate NR benefits based on the cortical dynamics of speech-in-noise processing with NR, using high-density electroencephalography. First, we found that NR facilitated phonological processing across the left-hemisphere dorsal-stream pathway; NR evoked stronger early responses in the supramarginal gyrus and weaker late responses in the inferior frontal gyrus compared to the no-NR condition. Second, calculating the amplitude ratio of the target word- and noise-evoked responses obtained from Heschl's gyrus allowed us to quantify individual listeners' speech unmasking ability, which predicted NR benefits. The present study takes a mechanism-based approach to HA outcomes and explores individual traits that may determine the success of a hearing intervention for a given listener.
December 3, 2020
Kelly Jahn, Au.D., Ph.D., Postdoctoral Fellow, Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
Age-Related Variation in the Cochlear Implant Electrode-Neuron Interface
Cochlear implants (CIs) can improve auditory perception for children and adults with severe to profound hearing loss, but little is known about how to optimize clinical interventions for individual patients. In fact, children and adults receive largely the same CI programming strategies despite their divergent hearing histories and auditory development trajectories. Since CIs interface directly with the auditory nerve, knowledge of the physiological integrity of the spiral ganglion neurons (SGNs) in individual CI users may assist in developing patient-specific programming recommendations. In a series of experiments, we demonstrate that early-deafened children with CIs likely have denser populations of viable SGNs than older adults. These age-related differences in the CI electrode-neuron interface portend several avenues for future investigation of novel programming parameters tailored to the pediatric population.
January 21, 2021
Ed Bartlett, Ph.D., Professor, BIO & BME
Excitatory input characteristics determine temporal coding in auditory thalamus
Matthew Tharp, Brandon Coventry, Aravindakshan Parthasarathy, and Ed Bartlett
The medial geniculate body (MGB) is the primary sensory input to auditory cortex. As part of the junction between early information from sensory neurons and later cortical, and eventually cognitive, information, MGB neurons are suspected to participate in a coding transformation of encoded acoustic stimuli while transmitting stimulus representations along an information processing pathway. During this transformation, neural coding characteristics transition from a time-dependent to a firing rate-dependent format. The ability of individual neurons to preserve information about stimulus features such as frequency or loudness during this transformation is uncertain, and an understanding of the underlying transformation mechanisms provides insight into physiologically relevant encoding capabilities. To delineate possible transformation mechanisms, a model of rat MGB firing patterns was constructed in silico using the NEURON software. Spike pattern inputs to the MGB model were based upon neural activity evoked by the presentation of various amplitude-modulated sound stimuli. Information-theoretic methods and analyses of firing rates and vector strengths were used to assess coding characteristics in the firing pattern outputs of the model. Parameters were organized to represent physiological properties of either the dorsal (MGd) or ventral (MGv) region of the MGB. Results indicate that, depending upon the specifications for inferior colliculus synaptic terminal conditions, the same inputs of auditory information may be represented in one of two distinct coding schemes. The corresponding physiological properties necessary for each coding scheme are found to be representative of actual physiological coding characteristics found within the MGd or the MGv.
These results provide evidence for parallel pathways of information transmission within the MGB while suggesting that distinct regions of the MGB participate in divergent representations of the same auditory information through transformation processes dictated by the nature of connections from the inferior colliculus (IC).
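Vector strength, one of the measures used above to quantify phase locking to amplitude-modulated stimuli, has a standard definition: the length of the mean resultant of unit phasors placed at each spike time's phase relative to the modulation frequency. A minimal sketch (the function name is ours; spike times in seconds, modulation frequency in Hz):

```python
import cmath
import math

def vector_strength(spike_times, mod_freq):
    """Vector strength of spikes relative to a modulation frequency.
    Returns 1 for perfect phase locking, ~0 for uniformly scattered phases."""
    if not spike_times:
        return 0.0
    # One unit phasor per spike, at that spike's phase within the modulation cycle.
    phasors = [cmath.exp(2j * math.pi * mod_freq * t) for t in spike_times]
    return abs(sum(phasors)) / len(spike_times)
```

For example, spikes landing at the same phase of every cycle give a value near 1, while spikes spread evenly across the cycle give a value near 0, which is why the measure indexes the time-dependent (phase-locked) coding format discussed above.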
January 28, 2021
Vibha Viswanathan, Ph.D. Student and F31 Fellow, Weldon School of Biomedical Engineering (Heinz Lab)
Effects of Masker Modulation Spectra and Fine Structure on Consonant Confusions
Prominent theories of speech intelligibility suggest that modulation masking of the energy fluctuations, or envelopes, in target speech by background noise influences perception. Consistent with this notion, our previous study showed that the spectral profile of EEG-based target-envelope coding is shaped by the masker’s envelope spectrum, and in turn predicts intelligibility across diverse backgrounds. However, this envelope coding is shaped not only by cochlear envelopes, but also by fine structure (faster stimulus fluctuations), which supports scene segregation. The present study examines whether consonant confusions further inform how the temporal information in scene acoustics shapes speech perception. Online subjects from Prolific.co performed a psychophysical consonant identification task in different masking conditions. Our results show that confusion patterns differ for maskers with different envelope spectra (after matching intelligibility), consistent with variations in modulation masking. However, confusion patterns also differ between intact and envelope-vocoded speech in babble, despite these conditions having similar masker envelope spectra. Importantly, listeners show a greater bias in the vocoded condition (compared to intact) toward reporting an unvoiced consonant as being heard, which suggests that fine structure conveys voicing (consistent with its role in pitch perception). These results inform future intelligibility models and assistive listening devices (e.g., cochlear implants).
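The confusion-pattern and response-bias analyses described above can be tallied generically from (presented, reported) trial pairs. The sketch below illustrates that bookkeeping with hypothetical consonant labels and a simple voicing-bias measure; it is not the study's actual scoring pipeline.

```python
from collections import Counter

def confusion_matrix(trials, consonants):
    """Tally presented-vs-reported counts from (presented, reported) trial pairs.
    Rows index the presented consonant, columns the reported one."""
    counts = Counter(trials)
    return [[counts[(p, r)] for r in consonants] for p in consonants]

def unvoiced_report_rate(trials, unvoiced):
    """Fraction of all responses reported as unvoiced: a simple voicing-bias
    measure to compare across masking conditions."""
    return sum(1 for _, r in trials if r in unvoiced) / len(trials)
```

Comparing the unvoiced report rate between intact and vocoded conditions (at matched intelligibility) is one simple way to express the voicing bias the abstract describes.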
February 18, 2021
ARO prep
Ten presentations that feature Purdue-affiliated authors are scheduled to be delivered at the 44th Annual Mid-Winter Meeting of the Association for Research in Otolaryngology (ARO) between February 20th and 24th, 2021. This week's SHRP will serve as a preview of a subset of the upcoming presentations.
March 4, 2021
Hari Bharadwaj, Assistant Professor, SLHS/BME
Central gain in aging, tinnitus, and temporary hearing loss
The nervous system is known to adapt in many ways to changes in the statistics of the inputs it receives. An example of such plasticity observed in animal models is that central auditory neurons tend to retain their driven firing rate outputs despite reductions in peripheral input due to hearing loss or cochlear deafferentation. The perceptual consequences of such adaptations are unknown; pathological versions of such "central gain" are thought to contribute to tinnitus and hyperacusis. To investigate central gain in humans, we designed an electroencephalogram (EEG)-based paradigm that concurrently elicits robust separable responses from different levels of the auditory pathway. Using this measure in a cohort of middle-aged subjects with normal audiograms, we find that cortical responses are relatively invariant despite a clear monotonic decrease in auditory nerve responses with age, a result consistent with widespread age-related cochlear deafferentation and central gain. We then applied the same measures to a cohort of individuals with persistent tinnitus and to a third cohort where a week-long monaural conductive hearing loss was induced using silicone earplugs. Overall, our results suggest that central gain following reduced input is ubiquitous in humans and may have consequences for listening in complex environments.
March 11, 2021
Michael Heinz, Professor, SLHS/BME
Effects of sensorineural hearing loss on robust speech coding
Listeners with sensorineural hearing loss often struggle to understand speech even when audibility has been restored. This is especially true in noisy situations and is thought to result from suprathreshold deficits such as degraded frequency selectivity and temporal precision. This talk will review some of the progress our lab has made exploring the effects of sensorineural hearing loss on the neural coding of sounds in noise as part of our NIH-funded R01 grant. This includes results showing that, in fact, the temporal precision of speech coding is not diminished, but rather that the strength of envelope coding can be enhanced in ways that may be detrimental for listening in noise (noise that itself has inherent fluctuations). Also, while broadened tuning certainly does degrade speech coding in noise, the primary effects of noise-induced hearing loss on speech coding (vowels and consonants) appear to come from distorted tonotopic coding associated with degraded tip-to-tail ratios in auditory-nerve tuning. Because the degree of distorted tonotopy appears to vary with etiology (e.g., noise-induced hearing loss vs. age-related hearing loss), it is possible that this understudied mechanism may be a significant factor contributing to individual differences in speech perception across listeners, even those with similar audiograms. Finally, this talk will present ideas for future work motivated by these results.
March 25, 2021
Discussion Facilitators: Jeff Lucas, Hari Bharadwaj, and other TPAN Faculty
RCR Discussion on Rigor and Reproducibility: Power analyses, open science/pre-registration, etc.
Two of the cornerstones of scientific advancement are rigor in designing and performing research and the ability to reproduce biomedical research findings. In response to the reproducibility crisis noted in many fields, the NIH, the NSF, and many journals are considerably ramping up their guidance, infrastructure, and expectations to emphasize and incentivize practices that enhance rigor and reproducibility. This week at SHRP we will have a discussion on power analyses (facilitated by Prof. Jeff Lucas), tools and workflows for reproducible science (facilitated by Prof. Hari Bharadwaj), and any other related topics that come up in the discussion process.
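As a concrete starting point for the power-analysis portion of the discussion, the textbook normal-approximation formula for a two-sample t-test, n per group ≈ 2·((z₁₋α/₂ + z₁₋β)/d)², can be computed in a few lines. This is a generic approximation (it slightly undercounts relative to the exact t-distribution calculation), not a tool attributed to the facilitators.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided, two-sample t-test
    using the normal approximation. effect_size is Cohen's d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

For a medium effect (d = 0.5) at α = 0.05 and 80% power, this gives about 63 subjects per group, which illustrates why underpowered designs are a central rigor-and-reproducibility concern.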
April 8, 2021
Morgan Chaney, Ph.D. Student and T32 Fellow, Department of Biological Sciences (Lucas Lab)
Sexual signals under genetically constrained polymorphisms - how does the female perceive quality?
April 15, 2021
Discussion Facilitator: Michael Heinz (Professor SLHS/BME; Co-Director, TPAN)
T32 TPAN Discussion on Diversity, Equity, and Inclusion (DEI): Issues and Initiatives
The overall goal of NIH in funding Institutional Research Training Grants (T32s) is to help ensure that a diverse and highly trained workforce is available to meet the needs of the Nation’s biomedical, behavioral, and clinical research agenda. Within this framework, attention is required to recruiting and retaining trainees from diverse backgrounds, including groups underrepresented in the biomedical, clinical, behavioral, and social sciences, as described in the Notice of NIH's Interest in Diversity. This week's seminar will be a group discussion that is intended to be a starting point for regular discussions among our students and faculty to increase our understanding of DEI issues that are particularly relevant to our NIH-funded Training Program in Auditory Neuroscience (TPAN). Following a brief overview of some of the important issues for us to understand and be working on, a number of ongoing and/or planned initiatives within and beyond Purdue will be discussed as a starting point for discussions of future TPAN activities and initiatives to expand our work in this important area.
April 29, 2021
Joseph Fernandez, Ph.D. Student, BME
Mechanisms of Secondary Injury and Auditory Deficits Following Mild Blast Induced Trauma
Blast-induced hearing difficulties affect thousands of veterans and civilians each year. The long-term impact of blast exposure on the central auditory system (CAS) can last months, even years, without major external injury, and is hypothesized to contribute to many behavioral complaints associated with mild blast-induced traumatic brain injury (bTBI). However, the mechanisms that underlie these long-term impairments are still poorly understood. Examining the acute time course and pattern of neurophysiological impairment (within the first two weeks), as well as the underlying molecular and anatomical post-injury environment, is therefore critical to understanding the mechanisms that lead to long-term CAS impairments. Although the initial mechanical injury likely plays a role in central auditory damage, a secondary molecular mechanism of damage likely drives the chronic auditory deficits following mild bTBI. Oxidative stress, along with inflammation, has been suggested as a key player in secondary molecular damage in other models of CNS injury, including other TBIs, and may underlie functional auditory deficits in mild bTBI as well. Here, we recorded changes in a variety of auditory evoked potentials (AEPs) in blast-exposed and noise-control rats over the course of two months to characterize region- and time-specific deficits. We compared these results to molecular and anatomical changes observed with immunohistochemical staining to understand the relationship between AEP and anatomical changes. Taken together, our results suggest that an acute cascade of (axonal) membrane damage and oxidative stress results in a temporally dependent inhibition/excitation imbalance over the course of two weeks.