
Seminars in Hearing Research at Purdue

 

Abstracts

Talks in 2019-2020

[LYLE 1150: 10:30-11:20am]

Link to Schedule


August 29, 2019

Elizabeth Strickland, Professor of Speech, Language, and Hearing Sciences

Changing the Channel

This past summer there was a Knowles Symposium in honor of Dave Green, a giant in the field of psychoacoustics. I will give the talk I gave at that symposium, and also share some details about the symposium. Below is my abstract for the symposium.

A great deal of early psychoacoustic research focused on the importance of a critical band, which might reflect a single channel in the auditory system. A surprising finding by Green and colleagues in the 1980s was that performance on intensity discrimination tasks improved if components were added that should be well outside the critical band. This type of task was called “profile analysis”, and it showed that listeners seemed to be comparing information across channels to make decisions. In one of the early papers on profile analysis [Green, Mason, and Kidd; JASA 75, 1163-1167, 1984], Green and colleagues found that thresholds improved with the duration of the profile. One hypothesis that they explored was the idea that the system had some type of “automatic gain control” that took about 50 ms to activate. Although this hypothesis did not seem to fit the data pattern in that paper, the idea of gain adjustment within a channel in response to sound has since received considerable attention. This talk will review evidence for within-channel gain adjustment.
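For readers unfamiliar with the paradigm, a profile-analysis stimulus can be sketched as an equal-amplitude, log-spaced tone complex in which the listener's cue is a level increment on the center component. The parameter values below are illustrative only, not those of Green, Mason, and Kidd:

```python
import numpy as np

def profile_stimulus(fs=44100, dur=0.1, n_components=5,
                     f_lo=200.0, f_hi=5000.0, increment_db=0.0):
    """Equal-amplitude, log-spaced tone complex; the center component
    is incremented by `increment_db` (the cue in profile analysis)."""
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_components)
    amps = np.ones(n_components)
    amps[n_components // 2] *= 10 ** (increment_db / 20)  # level increment
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, n_components)  # random starting phases
    return sum(a * np.sin(2 * np.pi * f * t + p)
               for a, f, p in zip(amps, freqs, phases))

standard = profile_stimulus(increment_db=0.0)   # flat profile
signal = profile_stimulus(increment_db=6.0)     # center component bumped 6 dB
```

Comparing the flat `standard` against the incremented `signal` across intervals is the across-channel comparison the abstract describes.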

 

September 5, 2019

Emily X. Han, Ph.D. Candidate, BIO

Co-Authors: Joseph Fernandez, Riyi Shi, Edward L. Bartlett

Auditory Processing Deficits Correspond to Secondary Injuries along the Auditory Pathway Following Mild Blast Induced Trauma

Background

The long-term impact of blast exposure on the central auditory system (CAS) can last months, even years, without major external injury, and is hypothesized to contribute to many behavioral complaints associated with mild blast traumatic brain injury (bTBI). Our group has previously documented the short-term and longer-term effects of acute blast and non-blast acoustic impulse trauma on auditory brainstem responses (ABRs) and responses to sinusoidally amplitude-modulated (AM) carriers in adult rats. However, the mechanisms that underlie these long-term impairments are still poorly understood. Although initial mechanical injury likely plays a role in central auditory damage, a secondary molecular mechanism of damage likely underlies the chronic auditory deficits following mild bTBI.

Methods

We recorded changes in ABR and auditory evoked potential (AEP) responses to AM and speech-like stimuli (iterated rippled noise pitch contours) in blast-exposed and control rats over two months. Acute single-unit recordings were also made to complement these observations. Subcortical auditory regions of interest were processed for tetramethylrhodamine (TMR) labeling, GAD67 immunohistochemistry, and in situ acrolein imaging to test for axonal damage, changes in inhibition, and oxidative stress, respectively.
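A sinusoidally amplitude-modulated carrier of the kind used in these recordings can be sketched in a few lines; the carrier frequency, modulation frequency, and modulation depth below are illustrative, not the study's settings:

```python
import numpy as np

def sam_tone(fc=8000.0, fm=45.0, m=1.0, fs=44100, dur=0.2):
    """Sinusoidally amplitude-modulated (SAM) tone:
    (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), with modulation depth m."""
    t = np.arange(int(fs * dur)) / fs
    return (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

x = sam_tone()  # 8 kHz carrier, 45 Hz modulation, 100% depth
```

Sweeping `fm` while holding the carrier fixed is the standard way such stimuli probe temporal (envelope) processing.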

Results

Preliminary results suggest axonal damage in ventral auditory tracts, increased GAD67 in the IC, and increased acrolein in the SOC 48 hours post-blast. The wave 1/4 ratio changed over time, from an initial increase to a significant decrease 30-60 days post-blast. Although changes in AM processing were limited, IRN representation in the subcortical auditory system was significantly disrupted two weeks post-blast, with suggestive evidence that the disruption persists even at one month.

Conclusions

Our results suggest that a cascade of (axonal) membrane damage, oxidative stress, and excitatory/inhibitory imbalance contributes to blast-induced subcortical CAS impairments.

 

September 12, 2019

Nabilah Sammudin and Vashi Negi (BIO, Fekete Lab)

Elucidating Zika Virus Pathogenesis in the Developing Chicken Inner Ear

Co-authors: Ankita Thawani, Vidhya Munnamalai, Hannah Reygaerts, Alexis Wozniak, Richard J. Kuhn, Donna M. Fekete

Congenital Zika Syndrome is a disorder that affects a subset of newborns who are exposed to Zika virus (ZIKV) during gestation. Microcephaly is the most notable birth defect, although sensorineural hearing loss is a co-morbid outcome in ~7% of affected babies, as shown by diminished otoacoustic emissions and auditory brainstem responses. These results suggest that at least some of the ZIKV-mediated hearing loss may originate within the cochlea and/or the auditory nerve. To explore the timing and consequences of exposing the developing cochlea to ZIKV, and to understand the mechanism underlying its susceptibility to ZIKV, we are using the chicken embryo as an accessible model organism.

ZIKV is delivered into the otocyst at embryonic days (E)2 to 5, and an antibody against double-stranded RNA virus is used to detect replicating virus at 2 to 8 days post infection (dpi). These time windows span the period before, during, and after the generation of auditory neurons and sensory hair cells in the chicken cochlea. Results show that ZIKV levels in the auditory ganglion decrease over time, while levels in the sensory cochlea increase over time. In the brain, ZIKV infection rapidly leads to decreased cell proliferation and increased cell death among neural progenitors. We found evidence of robust cell death in heavily infected auditory ganglia at 2 dpi, followed by dramatic reductions in ganglion size at ~7 dpi in some samples. The sensory cochlea shows only modest amounts of cell death for a few days after infection and rarely shows any overt effects on morphology. These data suggest a complex interplay between critical periods of virus susceptibility and the incidence of cell death in the auditory periphery.

We next asked whether our observations about virus susceptibility in the chicken embryo can be correlated to the spatiotemporal expression of possible ZIKV receptors identified in other species (mouse and human). Specifically, previous work identified the transmembrane protein Axl (a member of the TAM protein tyrosine kinases) as a candidate entry factor in human glial cells. However, no association has been established in vivo, possibly because multiple factors can serve as ZIKV receptors across a range of host cells and tissues. Since Axl does not exist in the chicken genome, we looked at another TAM receptor, Tyro3, which has 40% homology with human Axl. Our in situ hybridization results show that there is an overlap between ZIKV infection hotspots and Tyro3 expression patterns in the chicken brain and inner ear from E4 to E10. Further experiments are planned to explore the relationship between Tyro3 and ZIKV infection and to understand the role of Tyro3 in ZIKV pathogenesis.

Supported by NIDCD R21DC016732 (to D.M.F) and Purdue University.

 

September 19, 2019

Kristina DeRoy Milvae, Ph.D., Postdoctoral Associate, Hearing and Speech Sciences, U. Maryland

The role of ear asymmetry in cochlear-implant listening performance and effort

Cochlear implants (CIs) are auditory prostheses used to treat hearing losses so severe that hearing aids provide limited benefit. Adults are often implanted in one ear, but bilateral implantation is becoming more common. Bilateral implantation provides an opportunity for two-ear benefits, such as improved speech understanding in noisy environments. However, implantation is often sequential and may result in functional differences across ears (asymmetries), possibly related to differences in effective spectral resolution. Asymmetries may limit binaural benefits and make listening more cognitively demanding or effortful. A series of experiments will be discussed that examine the role of ear asymmetry in binaural performance and effort. Speech recognition was measured with competing speech in paradigms such as dichotic listening. Pupillometry was used as an index of listening effort. By studying both listening performance and the cognitive resources involved, we can gain a more complete understanding of the cognitive demands of CI listening than with behavioral measures alone.

 

September 26, 2019

Agudemu Borjigin, Ph.D. student, Weldon School of Biomedical Engineering

The Biology of the Inner Ear Course

The Biology of the Inner Ear (BIE) course is a 3-week training program that teaches advanced research approaches to the development, function, and pathology of the inner ear and the downstream auditory and vestibular pathways in the central nervous system. The BIE provides extraordinary opportunities for student-faculty interactions. This year, a total of 48 expert faculty along with 6 teaching assistants introduced a class of 17 students to the fundamentals of the auditory and vestibular systems through lectures, tutorials, and research seminars, as well as side-by-side guidance during laboratory exercises. Agudemu attended the course this year and will share highlights from the individual and team projects that students worked on during independent laboratory time throughout the course.

 

October 3, 2019

Jeffrey R. Lucas, Ph.D., Professor, BIOL

The geographical properties of chickadee song: our birds are truly weird

There are two chickadee species in Indiana: black-capped north of South Bend, and Carolina everywhere else. Black-capped chickadee song is a two-part song with a glissando followed by a pure tone. The glissando frequency is variable within individuals, but the ratio between the glissando and pure-tone frequencies tends to be fixed. Male quality is indicated by how variable this ratio is across all the songs a male sings. Little is known about similar properties of Carolina chickadee (CACH) song, though data published to date on CACH song recorded along the east coast from Pennsylvania to South Carolina suggest that there is no selection on the ratio and no tight transposition of frequencies, a conclusion backed up by playback studies. However, pitch shifts may be important in Indiana CACH song. We quantified song properties from 7 sites across Indiana. Indiana song appears to be truly weird relative to CACH song recorded throughout the rest of the geographical range of the species. CACH song is typically 4 notes with some dialectal variation. Song repertoires differed among the 7 sites, and the role of frequency in song production also appears to be population/culture specific. Simple pitch shifts in only part of the song were observed for the most common song type in 3 sites. True frequency transposition was found in 2 other sites. Songs of birds from the 6th site were more variable and lacked true pitch shifts. Songs from the 7th site indicate a continuous transposition of frequencies. Thus, the role of song frequency, along with song syntax, in song complexity appears to be subject to local levels of cultural evolution.
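The "fixed ratio" signature of true frequency transposition can be made concrete with a toy calculation; the note frequencies below are hypothetical values chosen only to illustrate a constant glissando-to-pure-tone ratio, not measured chickadee data:

```python
import numpy as np

# Hypothetical peak frequencies (Hz) of the glissando and the pure-tone note
# across several renditions by one male (illustrative values only).
glissando_hz = np.array([4100.0, 3900.0, 4300.0, 3700.0])
pure_tone_hz = np.array([3280.0, 3120.0, 3440.0, 2960.0])

ratios = glissando_hz / pure_tone_hz
# Under true frequency transposition the notes shift together, so the ratio
# stays (nearly) constant even though absolute frequencies vary widely.
ratio_cv = np.std(ratios) / np.mean(ratios)  # coefficient of variation
```

A small coefficient of variation of the ratio across renditions indicates transposition; under a simple pitch shift of only one note, the ratio (and its CV) would wander.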

 

October 10, 2019

Krishna Jayant, Ph.D., Asst. Professor, Weldon School of Biomedical Engineering

From synapse to soma: nanoelectrode electrophysiology for mapping brain activity across scales

Dendritic spines, characterized by a small head (volume ~0.01-0.1 μm³) and narrow neck (diameter ~0.1 μm, length ~1 μm), are the primary site of excitatory synaptic input in the mammalian brain. Synaptic inputs made onto spines first integrate onto dendrites, and subsequently propagate towards the soma and axon initial segment, where they further integrate with other inputs to determine overall action potential output. Elucidating the electrical properties of spines is thus paramount for understanding the first steps along this signal processing chain. Yet, their micron/sub-micron size has rendered conventional whole-cell intracellular electrophysiology infeasible. In the first part of this talk, I will introduce quantum-dot-labeled quartz nanopipettes (15-30 nm diameters) which, under two-photon visualization, enable targeted intracellular recordings from spines [1] and small pre-synaptic terminals. I will show through detailed experiments that (i) spines receive large EPSPs (25-30 mV), and (ii) estimated neck resistances are large enough to influence electrical isolation (mean ~420 MΩ) and to filter synaptic input as it invades the dendrite. I will then briefly describe the theoretical implications of these properties [2]. In the second part of this talk, I will describe a new method in which I combine the flexible property of these nanopipettes with microprisms to enable simultaneous two-photon calcium imaging and targeted intracellular electrophysiology across (a) cortical depth [3]; (b) different cell types; (c) somatic and dendritic segments; and (d) anaesthetized and awake head-fixed locomoting mice. As a prototypical application of the method, I will describe targeted intracellular recordings from PV+ interneurons while simultaneously imaging epileptic seizure spread across cortical layers.
I will also give a succinct overview of recent work on biomimetic nanopipettes, scanning nanopipette imaging, custom CMOS intracellular amplifiers [4], and the fabrication of vertical silicon nanoelectrodes. Finally, I will conclude by describing some recent endeavors at Purdue on nanoelectrode technologies to decipher dendritic mechanisms underlying the sense of touch.
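The electrical-isolation argument can be sketched as a simple steady-state voltage divider. Only the ~420 MΩ mean neck resistance and the 25-30 mV spine EPSP range come from the talk; the dendritic input resistance below is an assumed, illustrative value:

```python
# Steady-state sketch: if the spine neck acts as a series resistance R_neck
# feeding the dendrite's local input resistance R_dend, the EPSP attenuates
# by the voltage-divider factor R_dend / (R_neck + R_dend).
def neck_attenuation(r_neck_mohm, r_dend_mohm):
    return r_dend_mohm / (r_neck_mohm + r_dend_mohm)

spine_epsp_mv = 26.0  # within the 25-30 mV range cited in the abstract
# Assumed 100 MOhm dendritic input resistance (hypothetical):
dend_epsp_mv = spine_epsp_mv * neck_attenuation(420.0, 100.0)
```

With these numbers the dendritic EPSP is about 5 mV, illustrating how a large neck resistance can electrically isolate the spine head while strongly filtering what reaches the dendrite (the full picture is frequency-dependent; see the electrodiffusion modelling in reference [2]).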

    

Bio: Krishna Jayant received his B.Tech degree in electrical engineering from the National Institute of Technology (NIT) Tiruchirappalli, India in 2005, where as part of his bachelor’s thesis he worked on bio-inspired optimization techniques. After brief research stints at IISc Bangalore (2005-2006) and the University of Bologna (2006-2007), both in the area of microelectronics, he joined Cornell University, Ithaca, NY, where he received his M.S./Ph.D. in electrical engineering in 2014, working with Prof. Edwin C. Kan. His Ph.D. thesis focused on CMOS floating-gate transistors as interfaces to cells and biomolecules. He was the Kavli Post-Doctoral Fellow (awarded twice in two consecutive years) at Columbia University, New York, NY, working with Profs. Rafael Yuste, Ken Shepard, and Ozgur Sahin in the field of neuroscience and CMOS-integrated neurotechnology. His laboratory at Purdue works on topics spanning nanoelectronics, CMOS-integrated systems, biophysics, and neuroscience.

1. Jayant, K. et al. Targeted intracellular voltage recordings from dendritic spines using quantum-dot-coated nanopipettes. Nature Nanotechnology (2017).

2. Lagache, T., Jayant, K. & Yuste, R. Electrodiffusion modelling of spine voltage dynamics. Under review (2017).

3. Jayant, K. et al. Flexible nanopipettes for motion-insensitive intracellular electrophysiology in vivo. Cell Reports (2019).

4. Shekar, S., Jayant, K., et al. A miniaturized multi-clamp CMOS amplifier for intracellular neural recording. Nature Electronics (2019).

 

October 17, 2019

Karolina K. Charaziak, Ph.D., Postdoctoral Associate, Auditory Research Center, Caruso Department of Otolaryngology, University of Southern California

The more the merrier: New place-specific sources of cochlear microphonics

The cochlear microphonic (CM) constitutes a vector sum of electrical potentials produced by outer hair cells (OHCs). Because the CM can be measured with a far-field electrode placed in the vicinity of the cochlea, e.g., in the ear canal or at the round window, it may be useful for diagnosing OHC impairments, one of the most common causes of sensory hearing loss. However, CM measured with a far-field electrode suffers from a lack of place specificity, meaning that it is not useful for pointing to specific cochlear regions with impaired OHCs. This lack of place specificity is explained by the classic view of CM generation, in which CM potentials are dominated by contributions from cellular sources located at the basal, high-frequency end of the cochlea, regardless of stimulus frequency. As a result, CM amplitude is expected to vary little with stimulus frequency. However, contrary to this prediction, we showed that chinchilla CMs demonstrate striking rippling patterns when measured in response to low-level tones varied in fine frequency steps. We propose that the ripples arise through interference between CM components arriving at the recording electrode with different latencies, allowing for either constructive or destructive summation. In this talk I will review a phenomenological model of the chinchilla cochlea that predicts the existence of such additional CM components. According to the model, these additional CM components originate near the tonotopic place of the stimulus in the cochlea. We validate this theory by introducing test conditions that affect cochlear processing only near the tonotopic place of the stimulus. For such conditions, we observe a smoothing of CM amplitudes with frequency, as expected if only the classic CM components, originating near the cochlear base, were left intact. Thus, the new additional CM components appear to be shaped by active cochlear processing near the stimulus tonotopic place, and they could provide place-specific information about OHC function.
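The proposed interference mechanism can be sketched as a two-phasor sum: two components reaching the electrode with a latency difference tau produce an amplitude that ripples with frequency at a spacing of 1/tau. The latency difference and component amplitudes below are assumed values for illustration, not chinchilla measurements:

```python
import numpy as np

tau = 1e-3                                 # 1 ms latency difference (assumed)
freqs = np.linspace(500.0, 4000.0, 3501)   # fine-frequency sweep, 1 Hz steps
basal = 1.0                                # classic, base-generated component
tonotopic = 0.5                            # place-specific component (assumed)

# Phasor sum: constructive when the delayed component arrives in phase,
# destructive when it arrives in antiphase.
amplitude = np.abs(basal + tonotopic * np.exp(-2j * np.pi * freqs * tau))

ripple_spacing_hz = 1.0 / tau              # expected ripple spacing: 1000 Hz
```

Removing the `tonotopic` term flattens `amplitude` to a constant, mirroring the smoothing of CM amplitude observed when processing near the tonotopic place is disrupted.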

 

October 24, 2019

Malcolm Slaney, PhD, Research Scientist, Google Machine Hearing Research (Adjunct Prof., Stanford Univ., Dept. of Music)

Signal Processing and Machine Learning for Attention

Our devices work best when they understand what we are doing or trying to do. A large part of this problem is understanding what we are attending to. I’d like to talk about how we can do this in the visual (easy) and auditory (much harder and more interesting) domains. Eye tracking is a good but imperfect signal. Auditory attention is buried in the brain, and recent EEG (and ECoG and MEG) work gives us insight into it. These signals can be used to improve the user interface for speech recognition and the auditory environment. I’ll talk about using eye tracking to improve speech recognition (yes!), how we can use attention decoding to emphasize the most important audio signals, and how to gain insight into the cognitive load that our users are experiencing. Long term, I’ll argue that listening effort is an important new metric for improving our interfaces. Listening effort is often measured by evaluating performance on a dual-task experiment, which involves divided attention.

Bio: BSEE, MSEE, and Ph.D., Purdue University. Dr. Malcolm Slaney is a research scientist in the AI Machine Hearing Group at Google. He is an Adjunct Professor at Stanford CCRMA, where he has led the Hearing Seminar for more than 20 years, and an Affiliate Faculty member in the Electrical Engineering Department at the University of Washington. He has served as an Associate Editor of IEEE Transactions on Audio, Speech and Signal Processing and of IEEE Multimedia Magazine. He has given successful tutorials at ICASSP 1996 and 2009 on “Applications of Psychoacoustics to Signal Processing,” on “Multimedia Information Retrieval” at SIGIR and ICASSP, on “Web-Scale Multimedia Data” at ACM Multimedia 2010, and on “Sketching Tools for Big Data Signal Processing” at ICASSP 2019. He is a coauthor, with A. C. Kak, of the IEEE book “Principles of Computerized Tomographic Imaging,” which was republished by SIAM in their “Classics in Applied Mathematics” series. He is coeditor, with Steven Greenberg, of the book “Computational Models of Auditory Function.” Before joining Google, Dr. Slaney worked at Bell Laboratories, Schlumberger Palo Alto Research, Apple Computer, Interval Research, IBM’s Almaden Research Center, Yahoo! Research, and Microsoft Research. For many years, he has led the auditory group at the Telluride Neuromorphic (Cognition) Workshop. Dr. Slaney’s recent work is on understanding attention and general audio perception. He is a Senior Member of the ACM and a Fellow of the IEEE.

 

October 31, 2019

Edward L. Bartlett, Ph.D., Professor, BIOL/BME

Age-related changes of temporal coding representations in the peripheral and central auditory systems

Representations of temporal modulation are critical for sound recognition, including recognition in background noise and sound segregation. Aging affects temporal coding in complex ways because changes in the cochlea and inner hair cell-auditory nerve synapses can lead to compensatory changes in more central auditory regions. Here I will discuss some of these age-related alterations that can be observed through scalp recordings of auditory evoked potentials, their relationships to age-related alterations in inferior colliculus activities, how these are related to temporal discrimination, and preliminary results regarding training-based improvements.

 

November 7, 2019

Andres Llico Gallardo, Ph.D. Student, Weldon School of Biomedical Engineering (Talavage Lab)

Improved Neural Responses in Cochlear Implants Using a Physiologically Based Stimulation Strategy: Preliminary Results 

Cochlear implants (CIs) are implantable devices capable of partially restoring hearing by electrically stimulating the auditory nerve to mimic normal-hearing conditions. The resulting speech perception varies among CI users, depending mostly on their deafness and the surrounding noise conditions. Current electrical-stimulation strategies are often developed following phenomenologically based approaches rather than being derived from known physiological functions of the auditory system. The framework developed in this study seeks to provide an optimized electrical-stimulation strategy by maximizing the similarity between simulated neural patterns elicited in the auditory nerve by acoustic and electrical means. Preliminary results show increased correlation and reduced mean squared error between acoustic and electric stimulation when the proposed optimized strategy is used instead of a commonly used CI strategy that stimulates electrodes based on spectral energy content.
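The two similarity metrics named above (correlation and mean squared error between simulated neural patterns) can be sketched as follows. The synthetic patterns stand in for auditory-nerve model output and are purely illustrative:

```python
import numpy as np

def similarity(acoustic, electric):
    """Correlation and mean squared error between two simulated
    neural-activity patterns (flattened, neurogram-style arrays)."""
    a = np.asarray(acoustic, dtype=float).ravel()
    e = np.asarray(electric, dtype=float).ravel()
    corr = np.corrcoef(a, e)[0, 1]
    mse = np.mean((a - e) ** 2)
    return corr, mse

# Illustrative patterns: an "optimized" pattern tracks the acoustic one
# more closely than a noisier "baseline" pattern does.
rng = np.random.default_rng(1)
acoustic = rng.random(1000)
optimized = acoustic + 0.1 * rng.standard_normal(1000)
baseline = acoustic + 0.5 * rng.standard_normal(1000)

corr_opt, mse_opt = similarity(acoustic, optimized)
corr_base, mse_base = similarity(acoustic, baseline)
```

An optimizer that adjusts stimulation parameters to raise `corr` and lower `mse` against the acoustically evoked pattern is the essence of the framework described.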

 

November 14, 2019

Joshua M. Alexander, Ph.D., Associate Professor, SLHS

Frequency Lowering – Is it in or is it out?

This talk will be given at the Annual Auditory & Vestibular Translational Research Day sponsored by the University of Maryland on November 18th. It is targeted at a diverse audience and is designed to give a broad perspective on frequency-lowering hearing aid technology. In this talk, I will critically examine the current state of frequency lowering in the clinic and in research. I will then identify important intellectual barriers that are limiting progress in this area and show how my unique research approach removes these barriers. Finally, I will conclude with how discoveries made with my approach suggest an important paradigm shift that may influence future research and advancements in frequency-lowering technology and influence how this technology is used in the clinic.

 

November 21, 2019

Aditi Gargeshwari, Ph.D. Student (Krishnan lab), SLHS

Authors: Ananthanarayan Krishnan [1], Bram Van Dunn [2], Harvey Dillon [2] & Aditi Gargeshwari [1]

[1] Department of Speech, Language and Hearing Sciences, Purdue University, USA

[2] National Acoustic Laboratories, Chatswood, NSW, Australia

Human frequency following response: Correlates of spatial release from masking

Auditory stream segregation is the process by which a listener is able to differentiate the various auditory signals that arrive simultaneously at the ears and form meaningful representations of the incoming acoustic signals (Sussman et al., 1999). Auditory cues such as the perceived spatial location of sounds or the pitch of speakers’ voices help this process of segregating the total stream of sound (Bregman, 1990). Spatial release from masking (the improvement in the detection or reception threshold of a signal when it is spatially separated from competing sounds, compared to when it is co-located with them) may account for this spatial auditory stream segregation. Here we examine whether the phase-locked neural activity reflected in the brainstem frequency following response (FFR) exhibits spatial release from masking for a signal presented with either spatially co-located or spatially separated competing sounds. In this preliminary study, FFRs were obtained from normal-hearing young adults using a steady-state vowel (/u/, d = 250 ms) as the target, with its HRTF corresponding to 0 degrees azimuth; four-talker speech babble in each ear was used as the competing stimulus, with HRTFs manipulated to produce either 0 degrees (spatially co-located with the target) or +/- 90 degrees (spatially separated from the target) azimuth. In Experiment 1, FFRs were obtained in quiet, with co-located speech babble (0 degrees), and with separated speech babble (+/- 90 degrees). Responses were obtained at +10 dB SNR for each of the masked conditions. In Experiment 2, responses were obtained for the binaural +/- 90 degrees condition and for monaural presentation (with the stimuli to one ear turned off) of the same stimuli to the left and right ears. Robust responses were observed for all conditions. The mean f0 magnitude reduction was significantly greater for the binaural (BIN) co-located condition than for the spatially separated condition. Binaural and summed monaural data were essentially similar when the target was presented alone. Both binaural and summed monaural responses for the separated conditions showed magnitude reductions, but the reduction was greater for the summed responses. No difference in magnitude between the BIN and summed monaural (MON sum) responses was seen in quiet, but the f0 magnitude for the BIN +/- 90 degrees condition was significantly greater than for its monaural summed counterpart. These results suggest that binaural processing relevant to spatial release from masking (as opposed to a simple linear sum of monaural responses) may be reflected in the phase-locked neural activity of the brainstem. The FFR in this spatial paradigm shows promise as an objective analytic tool to evaluate spatial processing in children with CAPD and in listeners with peripheral hearing loss.
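The f0 magnitude measure used throughout the comparisons above amounts to the spectral magnitude of the FFR waveform at the fundamental. A minimal sketch using a synthetic waveform (the study's vowel /u/ has its own f0; the 100 Hz fundamental below is an assumed stand-in):

```python
import numpy as np

def f0_magnitude(ffr, fs, f0):
    """Spectral magnitude of an FFR waveform at the fundamental f0,
    taken from the FFT bin nearest f0 (normalized by waveform length)."""
    spec = np.abs(np.fft.rfft(ffr)) / len(ffr)
    bin_hz = fs / len(ffr)
    return spec[int(round(f0 / bin_hz))]

# Synthetic 250-ms "FFR" dominated by a 100 Hz fundamental plus a weaker
# second harmonic (illustrative only).
fs, f0 = 8000, 100.0
t = np.arange(int(fs * 0.25)) / fs
ffr = np.sin(2 * np.pi * f0 * t) + 0.2 * np.sin(2 * np.pi * 2 * f0 * t)

mag = f0_magnitude(ffr, fs, f0)
```

Comparing this magnitude across quiet, co-located, and separated conditions yields the f0 magnitude reductions reported in the abstract.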

 

December 5, 2019

Ivy Schweinzger, Ph.D., Post-doctoral Research Associate, SLHS

Examining the Physiologic Phenotype of Cochlear Synaptopathy Using Narrowband Chirp-Evoked Compound Action Potentials

Recent research in animals has found that, following noise exposures that induce a temporary threshold shift (TTS), there is permanent degeneration of the ribbon synapses connecting auditory neurons to inner hair cells even though outer hair cell function returns to normal. This leads to eventual degeneration of auditory nerve fibers (ANFs), specifically those with low spontaneous rates (SR) and high thresholds, which encode high-intensity sounds. This phenomenon has been termed cochlear synaptopathy.

The physiologic phenotype of cochlear synaptopathy presents as normal hair cell functioning and neural thresholds with degraded auditory nerve activity in response to high-intensity sounds, which is indicative of damage to low-SR ANFs. The purpose of this project was to expand on current animal research findings regarding noise-induced hearing impairment by comparing the auditory nerve activity evoked using a signal-in-noise action potentials (SiNAPs) technique to that evoked with both narrowband chirp and toneburst stimuli in quiet. Furthermore, using this technique, this study aimed to determine if music, a more human-typical exposure, produced the physiologic phenotype of cochlear synaptopathy when gerbils were exposed at levels deemed both safe and unsafe according to standards set for human hearing by the National Institute of Occupational Safety and Health (NIOSH).

Animals were separated into three groups: an unexposed group, a safe-exposure group (exposed for 2 hours at a time-weighted average of 90 dBA), and an unsafe-exposure group (exposed for 2 hours at a time-weighted average of 100 dBA). Auditory brainstem responses (ABRs) were measured pre-music exposure, immediately post-music exposure, and two weeks post-music exposure. Compound action potential (CAP) responses were then recorded at the two-week post-exposure time point. Results showed that exposed animals had SiNAPs responses that were significantly degraded in amplitude compared with SiNAPs responses for unexposed animals [F(9,1250) = 188, p < .001]. The amplitudes of responses shown with ABR amplitude-intensity functions did not significantly differ between the unexposed group and the safe- and unsafe-exposure groups at the two-week post-noise time point [F(2) = 0.406, p = 0.674]. However, there was a significant shift in ABR thresholds for both exposure groups immediately following the noise exposure. The recovery from TTS observed in the ABR findings, coupled with the degraded auditory nerve responses to 2 kHz narrowband chirp SiNAPs at intense levels (i.e., 80 dB SPL), suggests damage to low-SR ANFs caused by the high-intensity music exposure.

Animals that were exposed to music at safe levels showed auditory evoked potential amplitudes similar to those of animals exposed at unsafe levels. These findings suggest that exposure to music at levels deemed “safe” can cause physiological changes at the auditory periphery that are suggestive of both cochlear synaptopathy and permanent anatomical damage to outer hair cells, as well as to nerve fibers in some frequency regions of the gerbil cochlea. Moving forward, Mongolian gerbils may be an optimal translational model for research on noise-induced hearing loss, given the similarity of their noise-susceptible region to that of humans.

 

January 16, 2020     ***in MRGN 121*** 

Candidate for tenure-track faculty position in Hearing Science/Audiology

 

January 23, 2020   

Purdue @ ARO (multiple speakers/presenters)

The upcoming meeting of the Association for Research in Otolaryngology (ARO) features numerous presentations by Purdue affiliates. In particular, seven current students across multiple programs are gearing up to present as lead authors and four more current students have contributed as co-authors. The upcoming hearing seminar slot will be used by a subset of them to obtain feedback from the SHRP community in preparation for the conference.

 

January 30, 2020

Christian Stilp, Ph.D., Associate Professor, Psychological Sciences, University of Louisville

Spectral Contrast and Enhancement Effects in Speech Perception

All perception takes place in context. Objects and events in the environment are never perceived in isolation, but relative to surrounding stimuli. This is especially true in speech perception, as acoustic characteristics of surrounding sounds have powerful influences on perception of speech sounds. In this talk, I will discuss two classic effects of surrounding spectral context on perception: spectral contrast effects and auditory enhancement effects. I will show that speech sound categorization is exquisitely sensitive to both of these effects, which are related to each other at the individual differences level. I will also review the neural mechanisms thought to underlie these effects, and introduce data that seek to clarify where these effects occur in the auditory system.

 

February 6, 2020     ***in NLSN 1215*** 

Candidate for tenure-track faculty position in Hearing Science/Audiology

 

February 13, 2020

Emily X. Han, Ph. D. Candidate in BIO (PI: Ed Bartlett)

Short-Term and Long Term Sub-Cortical Auditory Pathophysiology Following Mild Blast Induced Trauma

Blast-induced hearing difficulties affect thousands of veterans and civilians each year. The long-term impact of blast exposure on the central auditory system (CAS) can last months, even years, without major external injury, and is hypothesized to contribute to many behavioral complaints associated with mild blast traumatic brain injury (bTBI). Our group has previously documented the short-term (two-weeks) and longer-term (one month) effects of acute blast and non-blast acoustic impulse trauma on click/tone pip and sinusoidally amplitude modulated (AM) carriers in adult rats. However, the mechanisms that underlie these long-term impairments are still poorly understood. Specifically, many measures of auditory function, including thresholds and DPOAEs either recover or exhibit subclinical deficits, thus masking deficits in processing complex, real-life stimuli under challenging contexts. Examining the acute time course and pattern of neurophysiological impairment (within the first two weeks), as well as the underlying molecular and anatomical post-injury environment, is therefore critical to better understanding and intervention towards bTBI-induced CAS impairments. This study aims to uncover central neurophysiological deficits related to “hidden hearing loss” in acute blast and non-blast acoustic impulse trauma in adult Sprague-Dawley rat model over the course of the first two weeks. Here, we recorded the changes in auditory brainstem response (ABR) and auditory evoked potential (AEP) response to amplitude modulation (AM) and speech-like stimuli (iterated rippled noise pitch contours) in acute mild blast and non-blast acoustic impulse exposed adult Sprague-Dawley rat over the course of two months. In conclusion, the current study suggests that primary, physical damage and secondary, biochemical damage, while related, may take effect on auditory pathophysiology at different time courses. 
We confirmed that CAP deficits can be better elucidated by increasingly complex processing tasks. Ultimately, this research can inform improved diagnostic and therapeutic strategies for bTBI-related deficits.
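For readers unfamiliar with the two stimulus classes mentioned above, the following is a minimal NumPy sketch of a sinusoidally amplitude-modulated (AM) tone and an iterated rippled noise (IRN); all parameter values (sample rate, carrier and modulation frequencies, delay, iteration count) are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

fs = 44100  # sample rate in Hz (assumed for illustration)

def am_tone(fc=8000.0, fm=40.0, depth=1.0, dur=0.5):
    """Sinusoidally amplitude-modulated tone: carrier fc (Hz), modulator fm (Hz)."""
    t = np.arange(int(fs * dur)) / fs
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

def iterated_rippled_noise(delay_s=1 / 200, gain=1.0, n_iter=8, dur=0.5, rng=None):
    """IRN: repeatedly delay-and-add Gaussian noise, producing a pitch near 1/delay_s."""
    rng = np.random.default_rng(rng)
    x = rng.standard_normal(int(fs * dur))
    d = int(round(fs * delay_s))  # delay in samples
    for _ in range(n_iter):
        y = x.copy()
        y[d:] += gain * x[:-d]  # add a delayed, scaled copy back in
        x = y
    return x / np.max(np.abs(x))  # normalize to unit peak amplitude

am = am_tone()
irn = iterated_rippled_noise(rng=0)
```

Concatenating IRN segments whose delays change over time yields the "pitch contour" stimuli described above.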

 

February 17, 2020     ***Special Day/Time:  MONDAY 1230-120 in MRGN 121*** 

Candidate for tenure-track faculty position in Hearing Science/Audiology

 

February 27, 2020

William Salloom (PhD Candidate, SLHS/PULSe, Strickland lab)

The effect of broadband elicitor duration on transient-evoked otoacoustic emissions and a behavioral measure of gain reduction

Humans are able to encode sound over a wide range of intensities even though neurons in the auditory periphery have much smaller dynamic ranges. A feedback system that originates in the brainstem may help solve this dynamic-range problem: the medial olivocochlear reflex (MOCR), a bilateral, sound-activated system that decreases amplification of sound by the outer hair cells in the cochlea. Much of the previous research on the MOCR in animals and humans has been physiological and has used long broadband noise elicitors. However, the effect of broadband noise elicitor duration on analogous behavioral tasks is unknown. In the current study, we explored the effects of ipsilateral broadband noise elicitor duration both physiologically and behaviorally in the same subjects. Understanding these effects is not only of fundamental importance for how the auditory system adapts to sound over time, but also of practical importance in laboratory settings that use broadband noise to elicit the MOCR.
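As a rough illustration of the kind of elicitor manipulated here, the sketch below generates Gaussian broadband noise bursts of varying duration with raised-cosine onset/offset ramps; the sample rate, ramp length, and duration values are generic assumptions, not the study's actual stimulus parameters.

```python
import numpy as np

fs = 44100  # sample rate in Hz (assumed for illustration)

def broadband_elicitor(dur_ms, ramp_ms=5.0, rng=None):
    """Gaussian broadband noise of the given duration (ms) with
    raised-cosine onset/offset ramps to avoid spectral splatter."""
    rng = np.random.default_rng(rng)
    n = int(fs * dur_ms / 1000)
    x = rng.standard_normal(n)
    nr = int(fs * ramp_ms / 1000)  # ramp length in samples
    ramp = 0.5 * (1 - np.cos(np.pi * np.arange(nr) / nr))  # 0 -> ~1
    x[:nr] *= ramp           # onset ramp
    x[-nr:] *= ramp[::-1]    # offset ramp
    return x

# a set of elicitors with increasing duration (values are illustrative)
elicitors = [broadband_elicitor(d) for d in (25, 50, 100, 200, 400)]
```

Varying only `dur_ms` while holding spectrum and level fixed isolates duration as the variable of interest, mirroring the manipulation described above.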

 

March 5, 2020

Jane Burton, AuD (PhD Candidate, Neuroscience Graduate Program, Vanderbilt University)

Linking Perceptual Performance with Cochlear Histopathology in Two Nonhuman Primate Models of Noise-Induced Hearing Loss

Hearing loss causes perceptual deficits relating to spectral, temporal, and spatial processing, which have been well characterized in humans. However, these deficits are quite variable among individuals, even those with similar clinical audiometric presentations. Some of this variability may be due to differences in underlying cochlear pathology, specifically differences in the loss of inner and outer hair cells and ribbon synapses. Identifying behavioral assays sensitive to these histopathological differences will improve clinical diagnostics, especially for patients with hearing difficulties that currently go undetected by standard diagnostic tools. To directly examine the relationship between cochlear pathology and perceptual deficits, we employed a comprehensive behavioral test battery that probed hearing sensitivity and spectral, temporal, and spatial processing in our macaque models of noise-induced sensorineural hearing loss (SNHL) and noise-induced synaptopathy (SYN). We identified distinct patterns of perceptual deficits that differentially predicted the type (SNHL vs. SYN), frequency range, and, to some extent, the severity of noise-induced cochlear pathology. These findings provide an essential and direct link between cochlear pathophysiology and the perceptual consequences of hearing loss, elevating the specificity of clinical diagnostics as a foundation for forthcoming therapeutic treatment options.

 

March 12, 2020

Alexander L. Francis, Associate Professor of Speech, Language, and Hearing Sciences

Psychophysiological correlates of effort related to different listening conditions

Listeners vary widely in their ability to understand speech in adverse conditions. Evidence suggests that differences in both cognitive and linguistic capacities play a role, but these factors may contribute differentially depending on the specific listening challenge. In this study, we evaluate the contribution of individual differences in age, hearing thresholds, vocabulary, selective attention, working memory capacity, personality traits, and noise sensitivity to variability in measures of story comprehension and listening effort in two listening conditions: (1) native-accented English speech masked by speech-shaped noise and (2) non-native-accented English speech without masking. Masker levels were adjusted individually to ensure comparable word recognition performance across conditions within participants. Dependent measures included comprehension test results, self-rated effort, and electrodermal, cardiovascular, and facial electromyographic measures associated with listening effort. Results showed varied patterns of responsivity across the different dependent measures as well as across the two listening conditions. In particular, results suggested that working memory capacity and vocabulary may play a greater role in the comprehension of non-native-accented speech than of noise-masked speech, while hearing acuity and personality may have a stronger influence on understanding speech in noise. Finally, we argue that electrodermal measures may be more closely associated with affective responses to noise-related interference, while cardiovascular measures may be more strongly affected by demands on working memory and lexical access.