
Seminars in Hearing Research at Purdue

 

PAST: Abstracts

Talks in 2018-2019

[LYLE 1150: 10:30-11:20am]



August 23, 2018

Josh Alexander (Alexander Lab)

Potential Mechanisms for Perception of Frequency-Lowered Speech

About 25% of the more than 36 million Americans with hearing loss and about 40% of all hearing aid users have at least a severe hearing impairment. These individuals have significant difficulty perceiving high-frequency speech information even with the assistance of conventional hearing aids. Frequency lowering is a special hearing aid feature that is designed to help these individuals by moving the mid- to high-frequency parts of speech to lower-frequency regions where hearing is better. This feature is offered in various forms by every major hearing aid manufacturer and it is the standard of care for children when conventional amplification fails to provide audibility of the full speech spectrum (American Academy of Audiology, 2013). However, there is a lack of strong evidence about when and how this feature should be used in the clinic. This stems from a critical knowledge gap concerning mechanisms important for the perception of frequency-lowered speech. Continued existence of this gap contributes to the lack of reproducibility of findings in this research area, suboptimal patient outcomes, and ineffective interventions.

This talk will focus on research conducted by the Experimental Amplification Research (EAR) lab on the latest commercially available method of frequency lowering, adaptive nonlinear frequency compression. This method provides unprecedented control over how sounds are remapped onto the residual capabilities of the impaired cochlea. A systematic investigation of the perceptual effects of this method in normal-hearing listeners was conducted using a variety of speech stimuli that had been processed with 8-9 different frequency-lowering settings for each of three hearing loss conditions. Auditory nerve model and acoustic analyses revealed that broadband temporal modulation accounted for 64-94% of the variance across each of the data sets. In fact, the data also revealed that current clinical recommendations for selecting frequency-lowering settings might significantly undermine potential benefit from this feature. A working hypothesis is that frequency-lowering methods and settings that preserve the greatest amount of temporal modulation from the original speech at the auditory periphery will yield the best outcomes for speech perception. Finally, this talk will discuss how the results from normal-hearing listeners compare favorably to predictions generated from auditory nerve simulations of various degrees of sensorineural hearing loss.
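
As a rough illustration of the kind of remapping involved (this is a generic sketch, not the proprietary algorithm of any manufacturer, and the cutoff and compression ratio below are illustrative values only), nonlinear frequency compression can be thought of as leaving frequencies below a cutoff untouched while compressing log-frequency distances above it:

```python
def nfc_map(f_in, cutoff=2000.0, ratio=2.0):
    """Illustrative nonlinear frequency compression: frequencies below
    `cutoff` (Hz) pass through unchanged; above it, the log-frequency
    distance from the cutoff is divided by `ratio`."""
    if f_in <= cutoff:
        return f_in
    return cutoff * (f_in / cutoff) ** (1.0 / ratio)

# With these settings, 8 kHz speech energy lands at 4 kHz,
# while everything at or below 2 kHz is left alone.
print(nfc_map(8000.0))   # 4000.0
print(nfc_map(1000.0))   # 1000.0
```

Varying the cutoff and ratio in a mapping like this is what produces the different frequency-lowering settings whose perceptual effects the study compares.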

 

August 30, 2018

Ankita Thawani (Fekete Lab)

Zika virus pathogenesis in developing brain and inner ear

Zika virus (ZIKV) is a tropical pathogen primarily transmitted by mosquitoes. Though infected adults typically show only mild febrile symptoms, vertical transmission of ZIKV from an infected pregnant mother to the fetus can result in severe congenital defects in the developing brain, such as microcephaly, ventriculomegaly, and calcifications.
 
Various cellular and animal models provide strong evidence that ZIKV preferentially infects neural progenitor cells, causing increased cell death and reduced proliferation. We addressed the question of whether all neural progenitor populations are equally infectable. Using an easily accessible embryonic chicken model, we explored ZIKV infection in early stages of brain development by injecting the neural tube at embryonic day 2. However, instead of a uniform infection all along the neuroepithelium, we found regions of heavy infection, or “hot-spots,” associated with certain key regions of the brain that are known for their role as morphogen-secreting signaling centers. Morphogens are secreted cues that are essential for fate specification and patterning of the neighboring tissue in a concentration-dependent manner. Our data demonstrated that upon heavy infection, not only were transcript levels for some of these morphogens reduced, but patterning defects were also observed in some dependent cell populations. Thus, while ZIKV preferentially infects neural progenitors, it also exhibits differential tropism for specific subregions of the developing brain, possibly abating their morphogenic function(s) during embryonic brain development.
 
In addition to ZIKV-induced brain development defects, the spectrum of congenital ZIKV syndrome has been extended to chorioretinal atrophy and sensorineural hearing loss. Stages of sensory organ development involve various neural-like cell populations that could be susceptible to ZIKV infection. Around 6% of newborns exposed prenatally to ZIKV presented with diminished otoacoustic emissions and auditory brainstem responses, indicating sensorineural hearing loss, perhaps originating in the cochlea. A key knowledge gap is the spatial and temporal susceptibility of the developing inner ear to ZIKV infection. ZIKV injection into the chicken otic primordium on embryonic days (E)2 to 5 frequently resulted in sensory epithelial infection in the vestibular and auditory organs, with infection found in the basilar papilla (the sensory cochlea) as late as E13. Non-sensory infection was also observed. The study aims to analyze short-term pathogenesis and long-term impact following ZIKV infection. We want to explore which inner ear cell types are most susceptible and prone to damage at each stage of infection.
 

September 6, 2018

Alexandra Mai (NIH T35 student research)

Beliefs Held by Parents of Infants and Toddlers with Hearing Loss

It is understood that the amount of time children wear their hearing devices and the degree of parent involvement are associated with language outcomes for children. However, device use and parent involvement are highly variable. Additionally, it is known that parents’ beliefs affect parenting actions and a child’s early cognitive development (Keels, 2009). The Scale of Parental Involvement and Self-Efficacy-Revised (SPISE-R) queries parents’ beliefs, knowledge, confidence, and actions, as well as their child’s device use, to examine parental self-efficacy. This study focused on the beliefs section of the questionnaire. Each of the eight beliefs has a cut-off beyond which responses are considered concerning and additional counseling for the parent is recommended. The purpose of this study was to determine what percentage of parents held concerning beliefs; to examine how child and family factors (i.e., parental education level, child’s current age, age at confirmation of the hearing loss, degree of hearing loss, and hearing device type) affected parent beliefs; and to determine whether a parent holding a concerning belief was associated with differences in their child’s device use or language development. This was done via an online survey made up of a demographic questionnaire, the SPISE-R, the Developmental Profile-3 communication subscale (DP-3), and the Parenting Sense of Competence self-efficacy subscale. Parents were also asked to submit their child’s most recent audiological results. Results indicate that a significant number of parents held concerning beliefs for all statements except two involving family and early interventionist impact. Additionally, parental education level, degree of hearing loss, age at confirmation, and current age of the child were each correlated with holding a concerning belief for one belief statement. Finally, only a concerning belief about whether a child’s hearing device(s) help him/her to communicate was associated with device use. No beliefs in the concerning range were associated with language development.
 

September 13, 2018

Prof. Lisa L. Hunter, Scientific Director of Research, Audiology, Cincinnati Children’s

High frequency hearing, otoacoustic emissions and speech-in-noise deficits due to aminoglycoside ototoxicity in cystic fibrosis

Aminoglycoside antibiotics are used worldwide to treat drug-resistant chronic lung infections. These lifesaving drugs unfortunately cause hearing loss due to ototoxicity, the effects of which progress from the base to the apex of the basilar membrane (inner ear). Therefore, in order to detect ototoxicity sooner, it is important to assess the higher-frequency region. This presentation will discuss extended high-frequency hearing and transient-evoked otoacoustic emissions to chirps (TEOAEs) to detect ototoxicity in pediatric patients with cystic fibrosis (CF) treated with aminoglycosides, compared to age-matched untreated controls. TEOAEs were measured using chirp stimuli at frequencies from 0.7-14.7 kHz, along with audiometry and speech-in-noise thresholds on the BKB-SIN test. Hearing thresholds were significantly poorer in the CF group than in the control group at all frequencies, particularly from 8-16 kHz, with thresholds in the CF group ranging up to 80 dB HL. Speech-in-noise performance on the BKB-SIN test was significantly poorer for the CF group compared to controls and age norms. TEOAE signal-to-noise ratios were significantly poorer in the CF group with significant hearing loss in the 8-10 kHz frequency region, compared to controls without hearing loss. These results show that newly developed chirp TEOAE measures in the extended high-frequency range are effective in detecting the cochlear impacts of ototoxicity. Poorer speech-in-noise performance in the group treated with aminoglycosides provides additional physiologic evidence of cochlear, and possibly neural, deficits.
 
*** External Speaker sponsored by the Association for Research in Otolaryngology (ARO)
 

September 20, 2018

Brandon S Coventry (Bartlett Lab), Ph.D. Candidate Weldon School of Biomedical Engineering, Institute for Integrative Neuroscience, Center for Implantable Devices, Purdue University

Optical deep brain stimulation of the central auditory pathway

Neurological and sensory neuroprostheses based on electrical stimulation have proven effective in restoring auditory percepts through cochlear and auditory brainstem implants, as well as in treating Parkinson’s disease and Tourette’s syndrome with deep brain stimulation (DBS). However, deficits in modern devices, such as current spillover and the inability to selectively target local circuits, result in undesirable auditory percepts in sensory prostheses and undesirable side effects in central nervous system implants. Infrared neural stimulation (INS) is an optical technique that has been shown to selectively stimulate nerves and neurons using long-wavelength (> 1450 nm) infrared light. INS is a promising stimulation modality because it does not require genetic modification of the target, allowing translation to human patients without additional genetic manipulations. Furthermore, previous studies in nerve have suggested that INS is more spatially specific than conventional electrical stimulation. Preliminary studies in the central nervous system have suggested that INS can elicit responses in cortical structures. However, the efficacy of INS in generating biophysical responses in thalamocortical networks is unexplored. Demonstration of effective thalamocortical recruitment would establish INS as a potential stimulation therapeutic that could theoretically improve on cochlear and brainstem implant performance. In this study, Sprague-Dawley rats of both sexes were implanted with optrodes in the medial geniculate body (MGB) of the auditory thalamus and 16-channel microwire arrays in the primary auditory cortex (A1). After recovery, auditory and infrared stimuli were presented to awake, restrained animals. Auditory stimuli consisted of click trains at sound levels between 60 and 90 dB, random spectrum stimuli with spectral contrasts of 5, 10, and 15 dB, and amplitude-modulated broadband noise.
Infrared stimuli were delivered in quasi-continuous-wave mode as single pulses of 0-600 mW power with pulse widths varying between 5-100 ms. Initial results show that infrared stimulation of the MGB gives rise to repeatable, short-latency action potentials and local field potentials in the auditory cortex. Furthermore, joint peri-stimulus time histogram analysis suggests that INS acts in a spatially specific manner, recruiting only local circuits for activation. Finally, the use of INS for next-generation cochlear implants and auditory brainstem/midbrain implants will be discussed.
 

September 28, 2018

Elizabeth Strickland, Professor of Speech, Language, and Hearing Sciences

Preceding sound may improve detection in a forward masking task

There are physiological mechanisms that adjust the dynamic range of the peripheral auditory system in response to sound. One of these, the medial olivocochlear reflex (MOCR), feeds back to the cochlea and adjusts its gain in response to sound. Our research uses behavioral measures that may reflect peripheral gain and looks for evidence of a decrease in gain after preceding sound. When a signal and a masker are on at the same time (simultaneous masking), preceding sound may make the signal audible at a lower signal-to-masker ratio, thus improving perception. However, when the masker precedes the signal (forward masking), preceding sound has been shown to increase signal threshold, decrease frequency selectivity, and decrease suppression. While all of these effects are consistent with a decrease in gain, they all sound like bad things. In this talk, I will show a condition in forward masking where the signal is audible at a lower signal-to-masker ratio following preceding sound, which might be a good thing.
 

October 4, 2018

Jeffrey Lucas, Professor, Department of Biological Sciences

Using auditory information to keep eagles out of wind turbines

Golden eagles and bald eagles are known to be involved in collisions with wind turbines. This source of mortality may be an important contributor to poor population viability for golden eagles in particular. One potential technique that could be used to reduce collision rates is to identify alerting stimuli that make the turbine itself a more salient stimulus to the birds. As part of a larger project, we have recently begun to collect data on the auditory physiology of eagles with an eye to finding stimuli that are maximally alerting. We are also looking for stimuli that are minimally influenced by noise masking because the conditions around wind turbines can potentially mask certain types of sounds. We review preliminary results on bald eagles and offer some insight into what types of auditory stimuli might be useful in reducing death rates of eagles in a world where wind energy is becoming a more important source of energy for an ever-growing human population.
 

October 11, 2018

Agudemu Borjigan (Bharadwaj Lab), Ph.D. student, Weldon School of Biomedical Engineering

Investigating the Role of Temporal Fine Structure in Everyday Hearing

In challenging environments with multiple sound sources, successful listening relies on precise encoding and use of fine-grained spectro-temporal sound features. Indeed, human listeners with normal audiograms can derive substantial release from masking when there are discrepancies in the pitch or spatial location between the target and masking sounds. While the temporal fine-structure (TFS) in low-frequency sounds can convey information about both of these aspects of sound, a long-standing and nuanced debate exists in the literature about the role of TFS cues in masking release in complex environments. Understanding the role of TFS in complex listening environments is important for optimizing the design of assistive devices such as cochlear implants. The long-term goal of the present study is to leverage individual differences across normal-hearing listeners to address this question. As a first step, we are measuring individual TFS sensitivity via both psychophysical and electroencephalography (EEG) approaches. Preliminary data show large variance across subjects in both behavioral and EEG measures. Follow-up experiments will compare individual differences in these TFS-coding measures to speech-in-noise perception with complex maskers in co-located and spatially separated configurations to understand the role of TFS in everyday hearing.
 

October 18, 2018

Kelly L. Whiteford, Ph.D., Postdoctoral Fellow, University of Minnesota

Mechanisms for Coding Frequency Modulation

Modulations in frequency (FM) and amplitude (AM) are fundamental for human and animal communication. Humans are most sensitive to FM at low carrier frequencies (fc < ~4 kHz) when the modulation rate is slow (fm < 10 Hz), which are also the frequencies and rates most important for speech and music perception. The leading explanation for our exquisite sensitivity within this range is that slow FM is coded by precise, phase-locked spike times in the auditory nerve (time code). Low-carrier FM at faster rates and higher carriers at all rates, on the other hand, are thought to be represented by tonotopic (place) coding, based on the conversion of FM to AM via cochlear filtering. We utilized individual differences in sensitivity to a variety of psychophysical tasks, including low-carrier FM and AM at slow (fm = 1 Hz) and fast (fm = 20 Hz) modulation rates, to better understand the peripheral code for FM. Tasks were assessed across three large groups of listeners: Young, normal-hearing (NH) listeners (n=100), NH listeners varying in age (n=85), and listeners varying in degree of sensorineural hearing loss (SNHL; n=56). Results from all three groups revealed high multicollinearity amongst FM and AM tasks, even tasks thought to be coded by separate peripheral mechanisms. For normal-hearing listeners, the bulk of variability in performance appeared to be driven by non-peripheral factors. Data from listeners varying in SNHL, however, showed strong correlations between the fidelity of cochlear place coding (frequency selectivity) and FM detection at both slow and fast rates, even after controlling for audibility, age, and sensitivity to AM. Overall, the evidence suggests a unitary code for FM that relies on the conversion of FM to AM via cochlear filtering across all FM rates and carrier frequencies.
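
The FM-to-AM conversion that the place-coding account relies on is easy to demonstrate numerically: a slow FM tone passed through a band-pass filter centered off the carrier (a crude stand-in for a single cochlear channel) comes out with its envelope fluctuating at the FM rate. The parameters below are illustrative, not taken from the study:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
fc, fm, df = 1000.0, 2.0, 100.0          # carrier, FM rate, frequency excursion (Hz)

# FM tone: instantaneous frequency swings fc +/- df at a rate of fm Hz
phase = 2 * np.pi * fc * t - (df / fm) * np.cos(2 * np.pi * fm * t)
fm_tone = np.sin(phase)

# A band-pass filter on one edge of the excursion, mimicking an off-CF cochlear channel
sos = butter(4, [1050, 1150], btype="bandpass", fs=fs, output="sos")
channel_out = sosfiltfilt(sos, fm_tone)

# The channel's Hilbert envelope now fluctuates at the FM rate: FM has become AM
env = np.abs(hilbert(channel_out))
mid = env[fs // 10 : -fs // 10]          # trim filter edge effects
depth = (mid.max() - mid.min()) / mid.max()
print(f"envelope modulation depth: {depth:.2f}")
```

Because the filter passes the tone strongly only when the instantaneous frequency sweeps into its passband, the output envelope is deeply modulated at the 2 Hz FM rate, which is the cue a place-coding mechanism could use.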
 

October 25, 2018

Erik Larsen, Ph.D., Golbarg Mehraei, Ph.D., and Ann Hickox, Ph.D., Decibel Therapeutics

Making Drug Therapies for Hearing a Reality: What Does It Take?

There are still no approved drugs available for preventing or treating hearing loss, despite a massive unmet need and the limitations of current hearing assistive devices. Recently, however, there has been an increasing amount of investment in new companies in the hearing therapeutics space. What is actually needed to translate scientific discoveries into actual products that can meet regulatory approval? Why haven’t pharmaceutical and biotech companies been successful so far? This talk will highlight some of these aspects using Decibel Therapeutics’ approach as an example, and will include some highlights from our research & development.

 

November 1, 2018

Miranda Skaggs and Nicole Mielnicki, Au.D. graduate students, SLHS (Strickland lab)

Behavioral measures of cochlear gain and gain reduction in listeners with normal hearing or minimal cochlear hearing loss

On the audiogram, hearing thresholds are divided into discrete categories of normal, mild, moderate, etc.  However, there is likely a continuum of hearing abilities even within the normal range. This is a continuation of a study examining the relationship between various psychoacoustic measures thought to be related to cochlear function, including gain reduction.  In the listeners tested, thresholds for long-duration tones ranged from well within the clinically normal range to just outside this range.  Where thresholds were elevated, other clinical tests were consistent with a cochlear origin.  Because the medial olivocochlear reflex (MOCR) decreases the gain of the cochlear active process in response to sound, when possible, measures were made with short stimuli.  Signal frequencies ranged from 1 to 8 kHz.  Maximum gain was estimated by measuring the threshold masker level for a masker at the signal frequency and a masker nearly an octave below the signal frequency.  One point on the lower leg of the input/output function was measured by finding the threshold masker level for a masker slightly less than one octave below the signal frequency needed to mask a signal at 5 dB SL.  Gain reduction was estimated by presenting a pink noise precursor before the signal and masker, and measuring the change in signal threshold as a function of precursor level. The relationship between these measures will be discussed.

Supported by NIH(NIDCD) R01 DC008327 (EAS), and grants from the Purdue Office of the Executive Vice President for Research and the Purdue Graduate School (WS).

Contributors: Miranda Skaggs, Nicole Mielnicki, Elizabeth Strickland, William Salloom, Hayley Morris, and Alexis Holt.

November 8, 2018 

Part 1: Rachel Ackerman (AuD student; Advisor: Lata Krishnan)

Hidden Hearing Loss - Is Music Noise to the Ears?

Noise exposure has been shown to cause cochlear synaptopathy in animal models. Studies in humans with suspected “hidden hearing loss” have shown mixed results. We evaluated auditory brainstem responses (ABR) in normal-hearing college-age students with a history of noise exposure and compared them to ABRs from musicians. Preliminary findings suggest that music and noise exposure have different physiologic effects.

Part 2: Meredith Klinker (AuD student; Advisor: Lata Krishnan)

Neural Representation of Speech in Individuals with Different Noise Tolerances

Individuals with better tolerance to noise, as indicated by their Acceptable Noise Level (ANL) score, are more likely to be successful hearing aid users. ANL scores vary greatly among individuals and are unrelated to age, gender, and hearing levels; however, little is known regarding the sources of this variability. Here we examine neural encoding of the envelope and temporal fine structure (TFS) of a speech stimulus using frequency following responses (FFRs) to determine if differences in encoding may account for the variability in ANL. FFRs were elicited using a speech stimulus presented in quiet and in noise (+10 and +5 dB SNR) in normal-hearing young adults with low and high ANL scores. Results suggest a differential susceptibility to noise between the two groups, indicating that FFR measures of envelope and TFS may provide insight into the sources of variability related to noise tolerance and hearing aid performance.

 

November 29, 2018

Inyong Choi, PhD, Asst. Professor, Communication Sciences & Disorders, U. Iowa

Causal relationship between selective attention and speech unmasking during word-in-noise recognition

This presentation will introduce results from two recent studies of normal-hearing listeners' speech-in-noise understanding: what cognitive factors explain its variability, and how. The first study shows that speech unmasking, revealed by the amplitude ratio between cortical auditory evoked responses to the target sound and to noise, predicts accuracy and processing speed during a word-in-noise recognition task. Individual differences in speech unmasking were thought to be related to the auditory selective attention process, which enhances the strength of neural responses to attended sounds while suppressing the neural responses to ignored sounds. In the second study, we tested whether training of selective attention can improve speech unmasking, which in turn improves the accuracy of word-in-noise recognition. During training, subjects were asked to attend to one of two simultaneous but asynchronous auditory streams. For participants assigned to the experimental group, visual feedback was provided after each trial to indicate whether their attention was correctly decoded from their single-trial EEG response. After four weeks of this neuro-feedback training, experimental group participants exhibited amplified cortical evoked responses to target speech as well as improved word-in-noise recognition, while placebo group participants did not show consistent improvement. This result demonstrates a causal relationship between auditory selective attention and speech-in-noise performance.

 

December 6, 2018

Ravinderjit Singh (Bharadwaj and Sayles Lab), TPAN T32 Fellow and MD/Ph.D. student, Weldon School of Biomedical Engineering

Neural Sensitivity to Dynamic Binaural Cues: human EEG and chinchilla single-unit responses

Animals encounter dynamic binaural information in broadband sounds such as speech and background noise. These dynamic cues can result from: 1) moving sound sources, 2) self-motion, or 3) reverberation. Two dynamic binaural cues investigated in this work are the inter-aural time delay (ITD) and inter-aural correlation (IAC). Most studies investigating ITD or IAC sensitivity have used static or sinusoidally varying inter-aural signals, while neural sensitivity to changes in ITD or IAC has rarely been systematically addressed.

We are using a systems-identification technique to characterize neural responses to the dynamics of changing ITD and IAC in broadband sounds. We use a maximum length sequence (MLS) to modulate either the ITD or IAC of a broadband noise carrier. Neural responses are recorded from humans using electroencephalography (EEG) and from auditory nerve fibers (ANFs) in terminally anesthetized chinchillas. Using the responses from ANFs, responses from a higher-order brainstem structure, the medial superior olive (MSO), are simulated. Human behavioral data are also obtained to determine the upper limits of human detection of dynamic IAC and to quantify how thresholds for target detection in noise vary with IAC dynamics.

Results thus far show that transfer functions from the MSO (simulated from ANF responses) are low-pass, with corner frequencies in the range of hundreds of Hz. In contrast, EEG-based transfer functions, presumably reflecting cortical responses, are also low-pass, but with corner frequencies in the range of tens of Hz. Preliminary human behavioral results will also be presented.
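
The MLS approach works because a maximum length sequence has an essentially impulse-like circular autocorrelation, so circularly cross-correlating the recorded response with the sequence recovers the system's impulse response (and hence its transfer function) directly. A minimal sketch with a toy impulse response, standing in for the actual ITD/IAC-modulated stimuli and neural recordings:

```python
import numpy as np

def mls(n_bits=10, taps=(10, 7)):
    """Maximum length sequence (length 2**n_bits - 1) from a Fibonacci
    LFSR, mapped to +/-1. (10, 7) is a standard maximal-length tap set."""
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):
        seq.append(1.0 if state[-1] else -1.0)
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

m = mls()
L = len(m)  # 1023

# Toy linear "system": a short smoothing impulse response
h = np.array([0.5, 0.3, 0.15, 0.05])
H = np.fft.fft(np.pad(h, (0, L - len(h))))
y = np.real(np.fft.ifft(np.fft.fft(m) * H))      # circular convolution: system output

# Circular cross-correlation of the output with the MLS recovers h
h_est = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(m)))) / L
# h_est[:4] matches h to within ~1/L
```

Because the MLS autocorrelation is L at zero lag and -1 everywhere else, the recovered impulse response is exact up to a small bias of order 1/L; an FFT of `h_est` then yields the kind of transfer function estimate described above.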

 

January 10, 2019

Faculty Discussion Leaders: Ed Bartlett, Ximena Bernal, Donna Fekete, Michael Heinz, Jeff Lucas, Mark Sayles

Responsible Conduct of Research (RCR) - Faculty-led Discussion on Animal Research 

The first spring SHRP "presentation" will be a responsible conduct of research (RCR) discussion on animal use, ethics, and protocols. Please come and participate in the discussion on this important topic. Note that if you are funded through a federal training grant, attending this session may be a requirement.

 

January 17, 2019

Vibha Viswanathan (Heinz Lab), TPAN F31 Fellow, Weldon School of Biomedical Engineering

Evaluating Human Neural Envelope Coding as the Basis of Speech Intelligibility in Noise

Models of speech intelligibility that accurately reflect human listening performance across a broad range of background-noise conditions could be clinically important (e.g., for deriving hearing-aid prescriptions, and optimizing cochlear-implant signal processing). A leading hypothesis in the field is that internal representations of envelope information ultimately determine intelligibility. However, this hypothesis has not been tested neurophysiologically. Here, we address this gap by combining human electroencephalography (EEG) with simultaneous perceptual intelligibility measurements. First, we derive a neural envelope-coding metric (ENVneural) from EEG responses to speech in multiple levels of stationary noise, and identify a mapping between the neural metric and corresponding speech intelligibility. Then, using the same mapping, we use only EEG measurements to test whether ENVneural is predictive of speech intelligibility in novel background-noise conditions and in the presence of linear and non-linear distortions. Preliminary results suggest that neural envelope coding can predict speech intelligibility to varying degrees for different realistic listening conditions. These results inform modeling approaches based on neural coding of envelopes, and may lead to the future development of physiological measures for characterizing individual differences in speech-in-noise perceptual abilities. Work Supported by NIH(NIDCD) F31 DC017381 (VV).
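
As a simplified acoustic analogue of envelope-coding fidelity (the ENVneural metric in the talk is derived from EEG responses; everything below, including the stimulus and the correlation-based score, is an illustrative stand-in), one can extract the slow envelope of a noisy mixture and correlate it with the clean target envelope:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

rng = np.random.default_rng(0)
fs = 8000
t = np.arange(0, 2.0, 1 / fs)

# Stand-in "speech": a noise carrier with a slow imposed envelope
clean_env = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)   # 4 Hz syllable-rate envelope
speech = clean_env * rng.standard_normal(t.size)
noisy = speech + rng.standard_normal(t.size)         # stationary-noise mixture

def envelope(x, fs, cutoff=16.0):
    """Broadband Hilbert envelope, low-pass filtered to keep slow modulations."""
    sos = butter(2, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, np.abs(hilbert(x)))

# A simple envelope-fidelity score: correlation between the recovered
# envelope of the mixture and the clean target envelope
r = np.corrcoef(envelope(noisy, fs), clean_env)[0, 1]
print(f"envelope-coding fidelity r = {r:.2f}")
```

As the noise level rises, a score like `r` degrades along with intelligibility, which is the basic logic of mapping an envelope-coding metric onto perceptual performance.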
 

January 24, 2019

Prof. Ruth Litovsky, Communication Sciences & Disorders / Otolaryngology, U. Wisconsin - Madison

Restoring binaural and spatial hearing in cochlear implant users

Patients with bilateral deafness are eligible to receive bilateral cochlear implants (BiCIs), and in some countries, patients who suffer from single-sided deafness are receiving a cochlear implant (SSD-CI) in the deaf ear. In both the BiCI and SSD-CI populations there is a potential benefit from the integration of inputs arriving from both ears. One of the demonstrated benefits is improved sound localization. To understand the factors that are important for binaural integration, we use research processors that deliver pulsatile stimulation to multiple binaural pairs of electrodes. Our novel stimulation paradigms are designed to restore both binaural sensitivity and speech understanding. A second known benefit is improved ability to segregate speech from background noise or competing maskers. Our recent studies are aimed at measuring both release from masking and release from cognitive load. In these studies, we use real-time pupil dilation as a means of assessing listening effort while subjects listen to speech stimuli. We are interested in the extent to which bilateral hearing in BiCI and SSD-CI patients promotes release from masking, and the corresponding cognitive load. By understanding the cost/benefit of integrating inputs to the two ears, a more complete picture of the advantages of bilateral stimulation can emerge.
 
Work funded by grants from NIH-NIDCD, with partial support for the SSD-CI study from MED-EL
 
*** External Speaker sponsored by the Association for Research in Otolaryngology (ARO)
 

January 31, 2019

Satyabrata (Satya) Parida (Heinz Lab), Ph.D. student, Weldon School of Biomedical Engineering

Effects of noise-induced hearing loss on speech-in-noise envelope coding: Inferences from single-unit and non-invasive measures in animals

Speech-intelligibility models (SIMs) can be used for systematic fitting of hearing aids and cochlear implants, potentially improving clinical outcomes in noisy environments. Existing SIMs are suitable for predicting the performance of normal-hearing subjects, but not hearing-impaired subjects, due to our limited understanding of the effects of cochlear hearing impairment on speech and speech-in-noise coding. To address this gap, we collected auditory nerve (AN) single-unit responses and envelope following responses (EFRs) in normal-hearing and hearing-impaired chinchillas to speech (a sentence), spectrally matched stationary noise, and noisy-speech mixtures. EFRs show evidence of degraded tonotopic coding, as observed in single-unit responses (e.g., Henry et al., J. Neurosci., 2016). In particular, the hearing-impaired group is more susceptible to masking of medium-frequency (0.5-3 kHz) information by low-frequency (<500 Hz) carrier energy. Our data also show an increased correlation between AN-fiber response envelopes for noisy speech and for noise alone in hearing-impaired fibers in speech-relevant modulation frequency bands, suggesting a greater degree of distraction from inherent envelope fluctuations following cochlear hearing loss. This novel finding is significant given the emphasis recent SIMs (e.g., Jørgensen and Dau, JASA, 2011) have placed on comparing inherent noise envelope fluctuations to speech envelope-coding fidelity in predicting noisy-speech perception. A future direction will be to develop SIMs based on our neuro- and electrophysiological data. Work supported by an International Project Grant from Action on Hearing Loss (UK).
 
 

February 7, 2019

A sampling of upcoming external presentations by the Purdue hearing science community

Speakers* & titles: 

  Prof. Lata Krishnan (SLHS): "Newborn Hearing Screening: Early Education = More Satisfied Mothers" (10 mins) 

  Emily Han (BIOL): "Auditory Processing Deficits Correspond to Secondary Injuries along the Auditory Pathway Following Mild Blast Induced Trauma" (10 mins) 

  Kelsey Dougherty (SLHS) & Hannah Ginsberg (BME): "Non-Invasive Assays of Cochlear Synaptopathy in Humans and Chinchillas" (10 mins) 

  Agudemu Borjigin (BME): "Individual Differences in Spatial Hearing may arise from Monaural Factors" (5 mins) 

  Ravinderjit Singh (BME): "Neural Sensitivity to Dynamic Binaural Cues: Human EEG and Chinchilla Single-Unit Responses" (5 mins)

  Vibha Viswanathan (BME): "Neurophysiological Evaluation of Envelope-based Models of Speech Intelligibility" (2 mins)

  Satyabrata Parida (BME): "Effects of Noise-Induced Hearing Loss on Speech-In-Noise Envelope Coding" (2 mins)

  *Note: Only the names of the authors presenting here are listed.

 

February 14, 2019

Hari Bharadwaj, Asst. Professor of Speech, Language, and Hearing Sciences and Biomedical Engineering

Assays of suprathreshold hearing: Integrative non-invasive windows into processes throughout the auditory system

Everyday noisy environments with multiple sound sources place tremendous demands on the auditory system. Successful listening in such environments relies on the interplay between early processes along the auditory pathway that encode the acoustic information, automatic processes throughout the auditory system that organize the encoded information, and cognitive processes such as selective attention that aid in processing target information while ignoring irrelevant sources. Consequently, to understand an individual's performance in such complex listening tasks, it is important to study the auditory system integratively at multiple levels. Here, we illustrate non-invasive approaches that can probe different processes along the auditory pathway, with applications to our understanding of suprathreshold hearing in three populations: middle-aged individuals, children with autism spectrum disorders, and young normal-hearing individuals with no hearing complaints but with widely varying performance in selective listening tasks.
 

February 21, 2019

Alexander Francis, Assoc. Professor of Speech, Language, and Hearing Sciences

Noise, hearing impairment, and health: the attention/effort/annoyance connection

Noise is a significant source of annoyance and distress and is increasingly recognized as a major public health issue in Europe and around the world. Workplace noise impairs job performance and increases fatigue and susceptibility to chronic disease. Noise is one of the top reasons given for abandoning a hearing aid. People who work in noise and people with hearing impairment are both at greater risk for cardiovascular diseases commonly associated with stress. Background noise may also be particularly troublesome for individuals with tinnitus, hyperacusis, or misophonia, all of which appear to involve atypical attentional and emotional responses to auditory stimuli. Even in non-clinical populations, sensitivity to noise varies considerably, with 20-40% of individuals reporting some sensitivity and 12% reporting high sensitivity. We hypothesize that both hearing impairment and background noise cause annoyance when irrelevant sounds interfere with task performance, e.g., through distraction and/or increased listening effort. Chronic annoyance, in turn, may induce physiological stress responses that damage long-term health. However, the effect of background noise and/or hearing impairment on long-term health may vary depending on individual differences in information processing (cognitive capacity), susceptibility to distraction (selective attention), and noise sensitivity or emotional responsivity (affective psychophysiology). In this talk I will discuss some recent studies we have been running within the context of a new research program to investigate individual differences in cognitive and affective responses to noise, and to develop objectively quantifiable measurements of psychophysiological responses to noise that could eventually be obtained through inexpensive wearable devices.
 

February 28, 2019

Matthew Tharp (Bartlett Lab), Weldon School of Biomedical Engineering

Alternative coding characteristics in the medial geniculate body formed by collicular terminal conditions

The medial geniculate body (MGB) is the primary sensory input to auditory cortex. As part of a junction between sensory and cortical neuronal populations, MGB neurons are suspected to participate in a “coding transformation” of encoded acoustic stimuli. During this transformation, neural coding characteristics may transition from a time-dependent to a rate-dependent format. The ability of single neurons to preserve information about stimulus features such as frequency or loudness during a coding transformation is uncertain, and an understanding of the mechanisms behind such a transformation provides insight into physiologically relevant encoding capabilities. To delineate possible transformation mechanisms, a model of rat MGB firing patterns was constructed in silico using the NEURON software. Spike-pattern inputs to the MGB models were based upon neural activity evoked by the presentation of various amplitude-modulated sound stimuli, and the resulting MGB output firing patterns were assessed. In this study, three metrics (information entropy, firing rate, and vector strength) are used to assess coding characteristics in mathematical models of MGB neurons. Model parameters were organized to represent physiological properties of either the dorsal (MGd) or ventral (MGv) region of the MGB, and the relationships between information entropy, firing rate, and vector strength were observed for different stimulus frequencies. Results indicate that, depending upon the structure of inferior colliculus synaptic terminals within the simulations, the same inputs of auditory information may be represented in one of two largely different coding schemes, and the corresponding physiological properties necessary for each distinct coding scheme are representative of actual physiological properties found within the MGd or the MGv.
These results provide evidence for parallel pathways of information transmission within the MGB while suggesting that distinct regions of the MGB participate in divergent representations of the same auditory information.
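For readers unfamiliar with one of the metrics above: vector strength quantifies how tightly spikes lock to a particular phase of an amplitude-modulation cycle. A minimal sketch of the standard definition (Goldberg and Brown, 1969), not the study's own code:

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength: project each spike onto the unit circle at its
    modulation phase and take the mean resultant length.
    1 = perfect phase locking, 0 = no phase locking."""
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times, dtype=float)
    return float(np.hypot(np.cos(phases).mean(), np.sin(phases).mean()))
```

Spikes occurring at the same phase of every modulation cycle yield a value near 1; spikes scattered uniformly across the cycle yield a value near 0.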
 

March 7, 2019

William Salloom, Ph.D. Candidate, PULSe program (Strickland Lab), SLHS

Physiological and Psychoacoustic Measures of Two Different Auditory Efferent Systems

The human auditory pathway has two efferent systems that can adjust our ears to incoming sound at the periphery. One system, known as the middle-ear muscle reflex (MEMR), causes contraction of the muscles in the middle-ear in response to loud sound, and decreases transmission of energy to the cochlea. A second system, known as the medial olivocochlear reflex (MOCR), decreases amplification by the outer hair cells in the cochlea. While these systems have been studied in humans and animals for decades, their functional roles are still under debate, especially their roles in auditory perception. The MOCR is thought to start being active at lower sound levels, and seems to have an effect across the frequency range, whereas the MEMR has been thought to be activated at higher sound levels and mainly affect low frequencies. The present study proposes to analyze these systems in more detail using physiological measures, and to measure perception using the same stimuli. We hypothesize that these systems may actively adjust the dynamic range in response to incoming sound so that we are able to perceive information-bearing contrasts.

 

March 21, 2019

Michael Heinz, Professor, Dept. of Speech, Language, and Hearing Sciences and Weldon School of Biomedical Engineering

Physiological and Behavioral Assays of Cochlear Synaptopathy in Chinchillas

Moderate-level noise exposure can eliminate cochlear synapses without permanently damaging hair cells or elevating auditory thresholds in animals. Cochlear synaptopathy has been hypothesized to contribute to human perceptual difficulties in noise that can be observed even with normal audiograms. However, this hypothesis is difficult to test because 1) ethical limits preclude measuring human synaptopathy directly, and 2) synaptopathy has been most completely characterized in rodent models for which behavioral measures at speech frequencies are challenging. We recently established a relevant mammalian behavioral model by showing that chinchillas have corresponding neural and behavioral amplitude-modulation (AM) detection thresholds in line with human thresholds. Furthermore, immunofluorescence histology confirmed that synaptopathy occurs in chinchillas across a broad frequency range, including speech frequencies, following a lower-frequency noise exposure that avoids permanent changes in ABR thresholds and DPOAE amplitudes. Auditory-nerve fiber responses showed that low-SR fibers were reduced in percentage (but not eliminated) following noise exposure, as in guinea pigs. Non-invasive wideband middle-ear muscle-reflex (MEMR) assays in awake chinchillas showed large and consistent reductions in suprathreshold amplitudes following noise exposure, whereas suprathreshold ABR wave-1 amplitude reductions were less consistent. The relative diagnostic strengths of MEMR and ABR assays were consistent with parallel studies of noise-exposed and middle-aged humans. Behavioral assays of tonal-carrier AM detection in chinchillas before and after noise exposure found no significant performance degradation, suggesting that more complex stimuli providing a greater challenge to population neural coding may be required. These anatomical, physiological, and behavioral data illustrate a valuable animal model for linking physiological and perceptual effects of hearing loss. 
Funding: NIH R01DC009838 (Heinz) and NIH R01DC015989 (Bharadwaj).

 

March 28, 2019

Andres Llico Gallardo, Ph.D. Student (Talavage Lab), Weldon School of Biomedical Engineering

Enhanced Speech Perception Using Physiologically-Based Cochlear Implant Stimulation – Preliminary Results

Cochlear implants (CIs) are electronic devices capable of partially restoring hearing in cases of severe hearing loss, a prevalent and disabling condition in the United States. CIs bypass the damaged peripheral auditory system by delivering electrical stimulation patterns directly to the auditory nerve, with the aim of replicating the outcomes of normal hearing. Modern stimulation patterns have generally been developed following a phenomenological approach rather than being derived from the known physiological function of the auditory system. However, physiological models are usually complex and computationally expensive, creating a trade-off between accuracy and performance and thereby limiting their use in practical applications. Using a computational model of the auditory system, we have developed an optimization framework that solves the inverse problem of finding the stimulation sequence that generates the desired neural activity pattern for a CI user. This optimized sequence results from comparing neural activity patterns from the computational model of normal hearing with those from a CI simulator. Experiments included the presentation of phonetically balanced and hVd words to a post-lingually deaf subject in noise and in quiet. Preliminary results have shown significant improvements in speech perception tests under noise conditions, and greater consistency between trials compared to traditional stimulation strategies. The proposed framework can serve as a ground truth for future improvements in either hardware or stimulation strategies; however, further research is needed to adapt it for real-time applications.
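The inverse problem described above has the general shape "choose a stimulation sequence whose modeled neural response best matches a target activity pattern." As a toy illustration only (the actual auditory-periphery model is nonlinear and far more complex), a linear stand-in for the model reduces the problem to least squares:

```python
import numpy as np

# Hypothetical linear stand-in M: maps a stimulation sequence (20 values)
# to a neural activity pattern (40 values). The real mapping is a
# nonlinear computational model of the auditory system.
rng = np.random.default_rng(1)
M = rng.standard_normal((40, 20))
target = rng.standard_normal(40)   # desired (normal-hearing) activity pattern

# Optimal stimulation in the least-squares sense: minimize ||M s - target||
s_opt, *_ = np.linalg.lstsq(M, target, rcond=None)
residual = np.linalg.norm(M @ s_opt - target)
```

With a nonlinear model, the same objective would instead be minimized iteratively (e.g., by gradient-based or derivative-free optimization), which is presumably why the abstract emphasizes the computational cost of physiological models.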
 
 

April 4, 2019

Matthew J. Thompson, TPAN T32 Fellow (Umulis Lab), Weldon School of Biomedical Engineering

Decoding morphogen signaling in the developing organ of Corti

The mature organ of Corti (OC) demonstrates meticulous spatial organization in cellular patterning, with positional accuracy finer than one cell diameter. This patterning emerges from a prosensory epithelium in response to spatiotemporal molecular cues known as morphogens. Many morphogens active during OC development have been identified, though their precise roles in spatial patterning are not yet fully characterized. It is hypothesized that a regulatory network involving the Bmp4, canonical Wnt/β-catenin, and Jag1-Notch pathways, active during E11.5 and E12.5, principally refines the boundaries of the sensory epithelium, setting the stage for supporting-cell and hair-cell differentiation beginning at E13.5 as the active network topology evolves. To investigate these signals, semiquantitative confocal imaging is used to extract numeric data for spatial profiles at E12.5, when morphogen signaling levels are strongest prior to differentiation. These data are analyzed using information-theoretic approaches to determine the amount of positional information provided along the medial-lateral domain, interpreted by cells as a positional identity that is used for cell fate determination. Additionally, the profiles are to be used to calibrate and investigate mechanistic reaction-diffusion models and to test hypotheses on network topology and morphogen transport.
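"Positional information" has a concrete information-theoretic meaning: the mutual information between a cell's position and its (noisy) morphogen readout. A toy sketch of the idea follows; the exponential gradient, noise level, and bin count are invented for illustration and do not come from the study.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from paired samples,
    using a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y bins
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# Toy example: cells along the medial-lateral axis read out a noisy
# exponential morphogen gradient (hypothetical shape and noise).
rng = np.random.default_rng(0)
position = rng.uniform(0.0, 1.0, 50_000)
readout = np.exp(-3.0 * position) + rng.normal(0.0, 0.05, position.size)
info_bits = mutual_information(position, readout)
```

Higher mutual information means cells can, in principle, distinguish more positions along the axis from the readout alone; the value is bounded above by the log of the number of bins used.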
 

April 11, 2019

Ravinderjit Singh (Bharadwaj and Sayles Lab), TPAN T32 Fellow and MD/Ph.D. student, Weldon School of Biomedical Engineering

Neural sensitivity to dynamic binaural cues: Human EEG and chinchilla single-unit responses

This will be a preview of Ravinderjit's upcoming presentation at ASA in May.

Animals encounter dynamic binaural timing information in broadband sounds such as speech and background noise due to moving sound sources, self-motion, or reverberation. Most physiological studies of interaural time delay (ITD) or interaural correlation (IAC) sensitivity have used static stimuli; neural sensitivity to dynamic ITD and IAC has rarely been systematically addressed. We used a system-identification approach based on maximum-length sequences (MLS) to characterize neural responses to dynamically changing ITDs and IACs in broadband sounds. Responses were recorded from humans (electroencephalogram; EEG) and from single neurons in terminally anesthetized chinchillas (auditory nerve fibers; ANFs). Chinchilla medial superior olive (MSO) responses were simulated based on binaural coincidence from recorded ANF spike times in response to left- and right-channel input. Estimated ITD and IAC transfer functions were low-pass, with corner frequencies in the range of hundreds of Hz. Human EEG-based transfer functions, likely reflecting cortical responses, were also low-pass, but with much lower corner frequencies in the region of tens of Hz. Human behavioral detection of dynamic IAC extended beyond 100 Hz, consistent with the higher brainstem limits. On the other hand, binaural unmasking effects were only evident for low-frequency ITD/IAC dynamics in the masking noise. This suggests that subcortically coded fast dynamic cues are perceptually accessible and may support detection, whereas cortical limits may be reflected in whether cues can be utilized for binaural unmasking.
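The MLS trick exploits the fact that a maximum-length sequence has a nearly ideal impulse-like circular autocorrelation, so circularly cross-correlating a system's response with the MLS input recovers the system's impulse response. A self-contained sketch with a toy FIR "system" (the LFSR taps and filter coefficients are illustrative, not the study's stimuli):

```python
import numpy as np

def mls(n_bits=10, taps=(10, 7)):
    """Maximum-length sequence of length 2**n_bits - 1 as +/-1 values,
    from a Fibonacci linear-feedback shift register (taps primitive
    for n_bits = 10)."""
    state = [1] * n_bits
    seq = []
    for _ in range(2**n_bits - 1):
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        seq.append(1.0 if state[-1] else -1.0)
        state = [fb] + state[:-1]
    return np.array(seq)

x = mls()                                # probe signal, length 1023
h_true = np.array([1.0, 0.6, 0.3, 0.1])  # toy impulse response

# System output: circular convolution of the MLS with the impulse response
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h_true, len(x))))

# Identification: circular cross-correlation of output with input,
# normalized by the sequence length, recovers h (up to a small DC bias)
h_est = np.real(np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(x)))) / len(x)
```

In the experiments described above, the "system" is the pathway from dynamic ITD/IAC modulation to neural response, and the recovered impulse response yields the low-pass transfer functions the abstract reports.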

 

April 18, 2019

Hari Bharadwaj, (Asst. Prof., SLHS/BME) and Kelsey Dougherty (AuD student, SLHS)

Characterizing "central gain" following reduced peripheral drive in the human auditory system

The nervous system is known to adapt in many ways to changes in the statistics of the inputs it receives. A prominent example of such brain plasticity that is observed in animal models is that central auditory neurons tend to retain their firing rate outputs at roughly a constant level despite reductions in peripheral input due to hearing loss. This "central gain" is thought to come about by down-regulation of inhibitory neurotransmission. Pathological versions of such central gain are thought to underlie disorders such as tinnitus and hyperacusis. Separately, animal models of aging also show down-regulation of inhibition throughout the auditory system -- the extent to which peripheral loss contributes to such age-related changes is unknown. This presentation will describe the approach taken by our lab to characterize central gain in humans, including tinnitus sufferers, using EEG and perceptual experiments. Preliminary results will be presented with a goal of obtaining unvarnished feedback.
 

April 25, 2019

Edward Bartlett (Professor, BIO/BME) and Marisa Dowling (MS student, BME)

Effects of Training and Corticofugal Modulation on Startle Behavior and Auditory Physiology

As languages utilize noisy, time-varying frequency trajectories, also known as pitch sweeps, to convey speech information, it is important to understand the impact of training as a potential means of preserving noisy pitch-sweep processing with aging, for example. In addition, it is unclear which circuit elements may mediate training-related changes. We hypothesize that top-down feedback via corticocollicular modulation may be involved. Neuromodulation, specifically pharmacogenetics, can be used to inhibit the neural pathway of interest to determine the significance of its role in complex speech processing. The consequences of training and of A1-IC pathway inhibition were investigated using behavioral and electrophysiological measurements. Behavioral discrimination of noisy, time-varying pitch sweeps was measured using prepulse inhibition of the acoustic startle response. Electrophysiological measurements of brain activity were assessed using envelope following responses (EFRs).