
Seminars in Hearing Research at Purdue

 

PAST: Abstracts

Talks in 2015-2016

[LYLE 1150: 10:30-11:20 AM]


 

September 3, 2015

Alex Francis, PhD (Francis lab)

What I did on my summer vacation: Results from two studies of autonomic responses to informational masking

 

In this talk I will present the results of two experiments employing psychophysiological measures to assess listening effort. This work was supported by a year-long fellowship for study in a second discipline, and has led to the development of a new focal line of research in my lab. Study 1 was conducted in collaboration with Adriana Zekveld at the Vrije Universiteit Medical Center in Amsterdam, and looks at the pupillary response to listening to speech in competing speech as a function of listeners’ expertise in a second language.  Results from study 1 showed that individuals who are more proficient in a foreign language exert greater effort to understand speech when that language serves as a masker than do those less proficient in the language. These results are consistent with the hypothesis that one source of effort in understanding speech in competing speech is competition between lexical items activated in the target and masking streams.  Study 2 was conducted here at Purdue with assistance from Anne Smith’s lab, and examines cardiovascular and electrodermal correlates of effortful listening under three different adverse conditions, two involving masking (with noise and with competing speech) and one involving distortion without masking (computer speech synthesis). Results suggest that understanding speech in the two masked conditions has a stronger effect on specific cardiovascular responses than does distortion, even when behavioral measures of listening performance and listeners’ subjective sensation of effort are comparable across the three conditions. These results are consistent with the hypothesis that listeners are employing selective attention more strongly in the two masked conditions, but may also indicate an affective (aversive) response to the presence of masking noise. 
Further research is proposed to distinguish between these possibilities, and to link these results to epidemiological studies of the effects of chronic noise exposure on cardiovascular health.

 

 

September 10, 2015

Mike Heinz, PhD (Heinz lab)

Neural modeling to relate individual differences in physiological and perceptual responses with sensorineural hearing loss

 

A great challenge in diagnosing and treating hearing impairment comes from the fact that people with similar degrees of hearing loss often have different speech-recognition abilities. Many studies of the perceptual consequences of peripheral damage have focused on outer-hair-cell (OHC) effects; however, anatomical and physiological studies suggest that many common forms of sensorineural hearing loss (SNHL) arise from mixed OHC and inner-hair-cell (IHC) dysfunction. Thus, individual differences in perceptual consequences of hearing impairment may be better explained by a more detailed understanding of differential effects of OHC/IHC dysfunction on neural coding of perceptually relevant sounds. Whereas it is difficult experimentally to estimate or control the degree of OHC/IHC dysfunction in individual subjects, computational neural models provide great potential for predicting systematically the complicated physiological effects of combined OHC/IHC dysfunction. This presentation will review important physiological effects in auditory-nerve (AN) responses following different types of SNHL and the ability of current AN models to capture these effects. In addition, the potential for quantitative spike-train metrics of temporal AN coding to provide insight towards relating these differential physiological effects to differences in speech intelligibility will be discussed.

 

September 17, 2015

Matt Goupell, PhD (Dept. of Hearing and Speech Sciences, University of Maryland)           

 

Binaural advantages for understanding in noise in bilateral cochlear-implant users

 

A primary reason for having two ears is to improve speech understanding in complex auditory environments with multiple sound sources. As the number of people who receive bilateral cochlear implants (CIs) increases, it is a goal to provide the best possible understanding of speech in noise. However, clinical processors are not designed to faithfully preserve binaural information, which limits how well bilateral implantees function. In this talk, we will discuss psychophysical results from simple electrical stimulation patterns that highlight some of the basic binaural processing limitations of bilateral cochlear implants. Then we will discuss a series of experiments that investigate what binaural advantages can be achieved with bilateral CIs for speech stimuli, as well as the barriers that limit those advantages. Finally, we will discuss how we might circumvent those barriers, either through new devices and speech-processing strategies, or through alternative approaches to clinical device settings.

 

September 24, 2015

Ankita Thawani (BIO, Fekete lab)

Investigating Cellular Patterning in the Developing Chicken Auditory Organ

 

Similar to the mammalian organ of Corti, the auditory organ in the bird, called the basilar papilla (BP), has two mechanosensory hair cell populations that are differentially innervated by afferent versus efferent axons. In the BP, tall hair cells are found on the neural side and short hair cells on the abneural side. Our lab is studying how Wnt signaling plays a role in patterning the prosensory progenitors to give rise to these different cell fates. Sienknecht and Fekete (2008) published a comprehensive Wnt expression study and found that Wnt9a transcripts are present adjacent to the neural edge of the prosensory BP from embryonic day 5 (E5), before any radial asymmetry is evident. Wnt9a ligand secreted from this domain might generate a protein gradient across the prosensory field. When Wnt9a is overexpressed via retroviral gene delivery in the E3 otocyst, the entire mature BP assumes a phenotype that is normally confined to the neural half of the organ. Its presence at the right time and place suggests that the Wnt9a ligand may function as a morphogen to pattern the radial axis and give the tall and short hair cells their identity. To support this hypothesis, we seek evidence for a gradient in Wnt signaling in the normal organ, beginning with a readout for canonical Wnt/β-catenin signaling. We used an antibody designed specifically against activated β-catenin, a transcriptional coactivator that plays a pivotal role in canonical Wnt signaling. Control E6 BPs show a gradient of nuclear β-catenin fluorescence intensity along the radial axis, higher on the neural side, supporting the hypothesis that Wnt9a acts via the canonical Wnt pathway. At E7-7.5, the spatial variations in β-catenin intensity are less apparent, although Wnt9a-infected BPs are wider, have more mitotic cells and display an upregulation of known downstream genes.
The nuclear β-catenin levels are moderately increased as well, but the observed augmentation is less robust than expected.  Perhaps the responsiveness of the BP to secreted Wnt ligands is being restrained, for example by the presence of secreted inhibitors, receptor availability or some negative feedback mechanism.

 

 

October 1, 2015

Erica Hegland (PhD student, SLHS, Strickland Lab)

Suppression, gain, and adaptability in younger and older adults

 

Our auditory systems automatically adjust to the acoustic environment around us, allowing us to pick out distinct sounds and to avoid potential dangers. One important mechanism that enables this adjustability is the medial olivocochlear reflex (MOCR). The MOCR is a sound-evoked, efferent reflex with a “sluggish” onset of approximately 25 ms, and it decreases cochlear gain in a level- and frequency-dependent manner. There is some evidence that the strength of the MOCR may decrease with age, thus decreasing the adjustability of the auditory system. This is one potential reason why older adults struggle to understand speech in noise. Furthermore, the endocochlear potential, which acts as the “battery” of the inner ear, may also decrease with age. A decreased endocochlear potential impacts outer (as well as inner) hair cell function and thus could alter cochlear nonlinearity. One measure of cochlear nonlinearity is two-tone suppression, i.e., a decrease in the cochlea’s response to one tone when another tone is presented simultaneously. There is evidence that elicitation of the MOCR decreases suppression, creating another mechanism for adaptability, but the idea that suppression can adapt is not yet well accepted. There is conflicting evidence on whether suppression decreases or otherwise changes with age. Thus, in older adults, suppression and/or the MOCR may decrease, both potentially reducing the adaptability of the auditory system. In this study, gain and suppression were measured in younger and older adults using a forward-masking paradigm with stimuli too short to be affected by the MOCR. Suppression was measured with suppressors above and below the signal frequency. Older adults were found to have significantly less suppression and lower gain estimates. The effect of precursor noise and tones on suppression estimates was then investigated in young adults, with both tonal and noise precursors presented prior to the suppression stimuli. Suppression was reduced in the presence of some precursors but not others. The pattern of suppression adaptability in younger adults, and predictions about adaptability in older adults, will be discussed.

 

October 8, 2015

Jesyin Lai (PhD student, PULSe, Bartlett Lab)

Age-related Differences in Envelope-Following Responses at Matched Peripheral or Central Activation

 

Previous work in aging has struggled with appropriate comparisons of animals across age due to changes in hearing threshold, as well as changes in evoked potentials due to age-related neuropathy, which includes cochlear neural degeneration of synapses and spiral ganglion cell bodies. Central auditory neurons may adjust their gain in aged animals to compensate for diminished cochlear neural input. This study compares the central temporal processing abilities of young and aged Fischer 344 rats when sound levels were matched according to amplitudes of auditory brainstem responses (ABRs) or envelope-following responses (EFRs). Matching ABR amplitudes should putatively match auditory nerve activation across age, while matching EFR amplitudes at 100% AM depth may identify whether population synchronization is robust to reduced salience of modulation. Stimulus intensities presented to young animals were manipulated to match aged animals’ mean 8-kHz tone ABR wave I amplitudes at 85 dB SPL, or EFR amplitudes evoked by AM tones presented at 85 dB SPL. EFRs were measured using sinusoidally amplitude-modulated (SAM) tones with various modulation depths (3-100%) and modulation frequencies (16-2048 Hz), presented in a quiet or noisy background. In the quiet condition, EFRs to SAM tones (45-256 Hz) with various modulation depths were similar in young and aged animals when peripheral and central activations were equalized. At lower AM frequencies (≤ 90 Hz) with matched peripheral inputs, the temporal modulation transfer functions (tMTFs) at 100% AM depth were larger in the aged than in the young, suggesting an increase in the central gain of temporal processing in aged animals. However, this increase in central gain was not observed when the AM depth was adjusted to 25%. The presence of notch noise suppressed the EFRs of young animals, especially at larger modulation depths and at the 256-Hz AM frequency. EFRs of aged animals were not suppressed by background noise, probably due to decreased or absent central inhibition. Since notch width had no effect on EFRs, AM tones with either 100% or 25% modulation depth were presented in the presence of low- or high-pass noise at signal-to-noise ratios of 60 to 0 dB. The results showed that high-frequency energy suppressed the EFRs of young animals at 40 dB SNR and of aged animals at 20 dB SNR. The overall results agree with our previous finding that central gain increases with age, and with the finding that listeners with age-related neuropathy have normal auditory detection in quiet.
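The EFR amplitude at a given modulation frequency is typically read from the magnitude spectrum of the averaged response. The minimal sketch below illustrates the idea on a synthetic response; the sampling rate, duration, and noise level are illustrative assumptions, not the study's recording parameters.

```python
import numpy as np

def efr_amplitude(response, fs, fm):
    """Spectral amplitude of an evoked response at the modulation
    frequency fm (Hz), read from the single-sided magnitude spectrum."""
    n = len(response)
    spec = np.abs(np.fft.rfft(response)) / n * 2.0   # single-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - fm))              # nearest FFT bin to fm
    return spec[idx]

# Synthetic "response": a 256-Hz envelope-locked component buried in noise.
fs, fm, dur = 8000.0, 256.0, 1.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
resp = 0.5 * np.sin(2 * np.pi * fm * t) + 0.1 * rng.standard_normal(t.size)

print(round(efr_amplitude(resp, fs, fm), 2))  # ≈ 0.5, the component's amplitude
```

With a one-second epoch the FFT bins fall on integer frequencies, so the 256-Hz component lands exactly on a bin; in practice one would average many sweeps before taking the spectrum.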

 

October 15, 2015

Mark Sayles, M.D., Ph.D. (Laboratory of Auditory Neurophysiology, KU Leuven, Belgium)

Cochlear Non-Linearities in Hearing-Impaired Mammals

Acoustic-signal transduction in the healthy cochlea is highly non-linear. Basilar-membrane response grows compressively with increasing input intensity. This compressive non-linearity reflects normal outer-hair-cell function, and manifests as suppressive spectral interactions: the system’s response to frequency “A” can be reduced by simultaneous energy at frequency “B”. Such frequency-dependent non-linearities are important for neural coding of complex sounds, e.g., speech. Acoustic-trauma-induced outer-hair-cell damage is associated with response linearization and consequent degradation in signal coding, contributing to reduced speech intelligibility for hearing-impaired listeners, especially in noisy environments. Auditory prostheses attempt to restore non-linearities with, e.g., “multi-channel dynamic compression” algorithms. However, their design is currently limited by the absence of detailed quantitative descriptions of suppression in response to ecologically relevant supra-threshold broadband sounds in hearing-impaired mammals. Here we used systems-identification techniques to quantify suppression in normal-hearing and hearing-impaired chinchilla auditory-nerve-fiber responses to broadband Gaussian noise.

To induce hearing loss, chinchillas were exposed to an intense narrowband noise under anesthesia, and allowed to recover for several weeks. Large populations of single-fiber spike-train responses to broadband noise were recorded from the auditory nerve in normal-hearing and hearing-impaired animals. We used spike-triggered reverse-correlation techniques to estimate the strength and timing of even-order suppressive interactions between frequency components (singular-value decomposition of the second-order Wiener kernel).
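The decomposition step can be illustrated with a toy example. The sketch below builds a rank-2 symmetric second-order kernel from one excitatory and one suppressive filter and recovers them by eigendecomposition (equivalent to SVD for a symmetric kernel); the filter shapes and frequencies are invented for illustration, not taken from the chinchilla data.

```python
import numpy as np

# Toy second-order kernel: rank 2, built from one "excitatory" and one
# "suppressive" filter (positive and negative eigenvalues, respectively).
n, fs = 128, 16000.0
t = np.arange(n) / fs
f_exc = np.sin(2 * np.pi * 3000 * t) * np.hanning(n)   # excitatory filter
f_sup = np.sin(2 * np.pi * 4500 * t) * np.hanning(n)   # suppressive filter
K2 = 1.0 * np.outer(f_exc, f_exc) - 0.4 * np.outer(f_sup, f_sup)

# Eigendecomposition of the symmetric kernel: positive eigenvalues mark
# excitatory sub-filters, negative eigenvalues mark suppressive ones.
w, v = np.linalg.eigh(K2)
order = np.argsort(-np.abs(w))                         # sort by magnitude
print(int(np.sum(np.abs(w) > 1e-6 * np.abs(w).max())))  # → 2
```

The two dominant eigenvectors approximate the original filters; with real spike-triggered data the kernel is estimated from second-order reverse correlation before this step.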

Hearing-impaired animals had elevated single-fiber excitatory thresholds (by ~20-40 dB) and broadened frequency tuning, consistent with outer-hair-cell damage. Most normal-hearing fibers showed strong suppression for frequencies above, below, and within the excitatory band. Responses from fibers in hearing-impaired animals showed unexpected characteristic-frequency-dependent changes in their patterns of suppression. For mid-frequency fibers (2-5 kHz), suppression was largely absent. High-frequency fibers (>5 kHz) innervating the basal part of the cochlea showed a loss of high-frequency suppression; however, strength of suppression for low-frequency sounds was increased in these fibers. For low-frequency fibers (<2 kHz) innervating the apical part of the cochlea, suppression was weak, and differed very little between normal and impaired animals.

Overall, hearing loss generally reduced the strength of suppressive non-linearities in auditory-nerve-fiber responses to broadband noise. More specifically, our data demonstrate important, previously unreported, frequency-dependent differences in the timing and tuning of suppressive non-linearities between normal and impaired cochleae. These data have potential to guide improvements in novel auditory-prosthesis amplification strategies, particularly for complex listening situations (e.g., speech in noise). Moreover, these analysis techniques may prove useful for understanding age-related changes in the balance of excitation and inhibition in central auditory-system neurons and find applications in the design of auditory-brainstem and auditory-midbrain implantable devices.

 

October 22, 2015

Chandan Suresh (PhD student, SLHS, Krishnan Lab)

Human frequency-following responses to vocoded speech with amplitude modulation and with amplitude plus frequency modulation


The speech-processing strategy in most cochlear implants extracts and encodes only amplitude modulation (AM) in a limited number of frequency bands. Zeng et al. (2005) proposed a novel speech-processing strategy, frequency-amplitude-modulation encoding (FAME), that encodes both AM and frequency modulation (FM) to improve cochlear implant performance. Using behavioral assessment, they reported better speech, speaker, and tone recognition with this novel strategy in individuals with normal hearing. In this study, we used scalp-recorded human frequency-following responses (FFRs), a non-invasive neural measure representing both envelope and temporal fine-structure information of complex sounds, to evaluate the neural representation of the diphthong /au/ vocoded with AM-only and FAME strategies using 2, 4, 8, and 16 channels. The results indicate that the neural representation of FAME stimuli is superior to that of AM-only stimuli at 2, 4, 8, and 16 channels. The spectral-slice data for the vowels (/a/, /u/) and the diphthong also revealed better representation of speech cues with FAME than with AM alone. Interestingly, the recorded responses to 16-channel FAME were equivalent to those to the control stimulus, and 2-channel FAME responses were comparable to 8-channel AM-only responses. The stimulus-to-response spectral correlation analysis revealed a correlation coefficient three times greater for FAME than for AM alone for responses to the 8- and 16-channel stimuli. This better encoding of the complete harmonic structure with the FAME strategy may indicate better representation of pitch-relevant information. Taken together, these results suggest that neural information preserved in the FFR may be used to evaluate signal-processing strategies considered for cochlear implants.
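A stimulus-to-response spectral correlation of the kind described can be sketched as a Pearson correlation between magnitude spectra. The function and signals below are illustrative stand-ins (a harmonic "stimulus" and two synthetic "responses"), not the FFR analysis pipeline used in the study.

```python
import numpy as np

def spectral_correlation(stim, resp, fs, fmax=3000.0):
    """Pearson correlation between the magnitude spectra of a stimulus
    and a response, restricted to components below fmax (Hz)."""
    n = min(len(stim), len(resp))
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    keep = f <= fmax
    s = np.abs(np.fft.rfft(stim[:n]))[keep]
    r = np.abs(np.fft.rfft(resp[:n]))[keep]
    return np.corrcoef(s, r)[0, 1]

# A response that preserves the stimulus harmonics correlates highly;
# a response that loses them does not.
fs = 16000.0
t = np.arange(int(fs * 0.2)) / fs
f0 = 100.0
stim = sum(np.sin(2 * np.pi * f0 * k * t) / k for k in range(1, 11))
rng = np.random.default_rng(1)
good = 0.8 * stim + 0.05 * rng.standard_normal(t.size)
poor = 0.05 * rng.standard_normal(t.size)
print(spectral_correlation(stim, good, fs) > spectral_correlation(stim, poor, fs))
```

Correlating magnitude spectra ignores phase, which is why this metric indexes how faithfully the harmonic structure, rather than the waveform itself, is preserved.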

 

October 29, 2015

Christoph Scheidiger (PhD student, Hearing Systems Group, Tech. Univ. Denmark)

Towards a model of speech intelligibility in hearing impaired listeners

 

Christoph Scheidiger and Torsten Dau

Hearing Systems group, Department of Electrical Engineering, Technical University of Denmark, DK-2800, Kgs. Lyngby, Denmark

 

Work on modeling speech intelligibility (SI) started with the articulation index (AI) in the early 1920s. Models following the AI were extended to consider a larger variety of listening conditions. Some recent studies predicted SI in normal-hearing (NH) listeners based on a signal-to-noise ratio measure in the envelope domain (SNRenv; Jørgensen and Dau, 2011; Jørgensen et al., 2013). This framework showed good agreement with measured data in a broad range of conditions, including stationary and modulated interferers, reverberation, and spectral subtraction. The presented study investigates to what extent effects of hearing impairment (HI) on SI can be modeled within this framework. A first fundamental step towards that goal was to modify the model to account for SI in NH listeners with band-limited stimuli, which resembles SI in HI listeners. The presented model combines the auditory processing of the multi-resolution speech-based envelope power spectrum model (mr-sEPSM; Jørgensen et al., 2013) with a correlation-based decision metric inspired by the short-time objective intelligibility measure (STOI; Taal et al., 2011). The proposed model was tested in conditions of stationary noise, fluctuating interferers, spectral-subtraction noise-reduction algorithms, phase-jitter distortions, reverberation, and band-limited speech. In a second step, the loss of sensitivity of HI listeners was incorporated into the model. Simulations show that, by accounting only for the sensitivity loss, the model predictions agree with masking-release (MR) data measured from thirteen HI listeners. Further steps for modeling other deficits typically observed in HI listeners will be briefly outlined at the end of the talk.

 

November 5, 2015

Josh Alexander, Ph.D. (SLHS, Purdue Experimental Amplification Research Lab)

The Case of the Disappearing and Reappearing ‘s’: Factors that Influence Speech Perception in Hearing Aids using Nonlinear Frequency Compression

It has been documented that hearing aid users, especially children, often have limited access to important high-frequency speech information, such as /s/. For listeners with mild to moderate sensorineural hearing loss (SNHL), this can occur because the miniature receivers in hearing aids are unable to provide sufficient high-frequency amplification, or cannot do so without audible whistling and overtones caused by feedback, unacceptable sound quality, or excessive loudness. For listeners with more severe SNHL, the inner hair cells that code these frequencies may be absent or non-functioning, possibly rendering amplification in this region less useful. Frequency-lowering techniques have been discussed as a means of re-introducing high-frequency speech cues not only for listeners with moderately severe to profound SNHL, but also for listeners with mild to moderate SNHL. The premise behind all frequency-lowering techniques is to use all or part of the aidable low-frequency spectrum to code parts of the inaudible high-frequency spectrum important for speech recognition.

By using a controlled laboratory design to vary the parameters that regulate nonlinear frequency compression (NFC), this study examined how different ways of repackaging inaudible mid- and/or high-frequency information at lower frequencies influence the perception of consonants and vowels embedded in low-context stimuli. The research question focused on a related clinical question faced by audiologists who fit this technology: what is the best way to implement the technology for a particular aided speech spectrum (bandwidth of audibility), to make the most of the hearing aid user’s residual auditory capabilities? For this reason, low-pass filtering and the selection of NFC parameters were used to fix the output bandwidth (BW) at one of two frequencies, representing a severe-to-profound (3.3 kHz) or a mild-to-moderate (5.0 kHz) BW restriction. The effects of different combinations of NFC start frequency and input BW (varied via the compression ratio, CR) were examined using two groups of listeners, one for each output BW.
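NFC is commonly described as compressing the frequency axis above a start frequency on a log scale; under that assumption, fixing the output bandwidth amounts to choosing the compression ratio. A minimal sketch (the function names and the exact log-domain mapping are assumptions, not a particular hearing aid's implementation):

```python
import math

def nfc_map(f, sf, cr):
    """Output frequency under log-domain nonlinear frequency compression:
    inputs below the start frequency sf pass through unchanged; above it,
    the distance from sf on a log axis is divided by the compression ratio cr."""
    if f <= sf:
        return f
    return sf * (f / sf) ** (1.0 / cr)

def cr_for_bandwidth(sf, in_bw, out_bw):
    """Compression ratio that maps a given input bandwidth onto a desired
    output bandwidth for a given start frequency."""
    return math.log(in_bw / sf) / math.log(out_bw / sf)

# Fix a 5.0-kHz output bandwidth for an 8-kHz input band, start at 1.6 kHz.
cr = cr_for_bandwidth(sf=1600.0, in_bw=8000.0, out_bw=5000.0)
print(round(nfc_map(8000.0, 1600.0, cr)))  # input band edge lands at 5000 Hz
```

This makes the trade-off in the abstract concrete: for a fixed output BW, raising the start frequency or narrowing the input BW lowers the CR needed, preserving more spectral detail below the start frequency.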

 

November 12, 2015

Daniel Carr, M.S. (Mech Eng, Herrick Acoustics Lab - Patricia Davies)

Two Laboratory Studies of People’s Responses to Sonic Booms and Other Transient Sounds as Heard Indoors

Manufacturers of business jets have expressed interest in designing and building a new generation of supersonic jets that produce shaped sonic booms of lower peak amplitude than the booms created by the previous generation of supersonic aircraft. To determine whether these “low” booms are less intrusive and the noise exposure is more acceptable to communities, new laboratory testing to evaluate people’s responses must occur. To guide aircraft design, objective measures that predict human response to modified sonic boom waveforms and other impulsive sounds are needed. The current research phase is focused on understanding how people will react to booms when heard inside, and must therefore include considerations of house type and the indoor acoustic environment. A test was conducted in NASA Langley’s Interior Effects Room (IER), with the collaboration of NASA Langley engineers. This test focused on the effects of low-frequency content and of vibration, and subjects sat in a small living-room environment. A second test was conducted in a sound booth at Purdue University, using similar sounds played back over earphones. The sounds in this test contained less very-low-frequency energy due to limitations in the playback, and the laboratory setting is a less natural environment. For the purpose of comparison, and to improve the robustness of the human-response prediction models, both sonic booms and other more familiar transient sounds were used in the tests. In the Purdue test, binaural simulations of the interior sounds were included to compare responses to those sounds with responses to playback of binaural recordings taken in the IER. Major conclusions of this research were that subject responses were highly correlated between the two tests, and that annoyance models including Loudness, maximum Loudness Derivative, Duration, and Heaviness terms predicted annoyance accurately.

 

November 19, 2015

Kelly Ronald (BIO, Lucas lab)

Estrogen-mediated changes in the auditory system: implications for songbird mating preferences

 

Many organisms experience seasonal changes in auditory functioning that are mediated by fluctuations in steroid hormones. Estrogen, in particular, is thought to up-regulate auditory processing to increase the salience of mating signals during the breeding season. We examined the influence of estrogen on frequency selectivity and sensitivity in female brown-headed cowbirds (Molothrus ater), a songbird that evaluates male song during mate choice. We found that female estrogen profiles significantly influenced the shape of the audiogram and the tuning of auditory filters. We will discuss how these estrogen-mediated changes may allow females to discriminate between the relevant portions of male cowbird song.

 

November 26, 2015

THANKSGIVING

 

January 14, 2016

Prof. Patricia Davies (Mechanical Engineering, Herrick Acoustics Lab)

Predicting the Impact of Aircraft Noise

Annoyance and sleep disturbance are two impacts of aircraft noise on communities around airports. Currently, Day-Night Level (DNL), a metric based on the average A-weighted sound pressure level outdoors, is used to assess the impact of the noise. There are arguments that we should look for a better annoyance metric than DNL. Some are based on an increased understanding of how sound is perceived and the importance of sound characteristics, in addition to loudness, that can affect annoyance. Others argue that annoyance is influenced by the number of aircraft noise events and their maximum levels, and that the average-energy approach of most environmental noise metrics does not capture this. There is a 10 dB penalty for noise events at night incorporated into DNL, but sleep disturbance is a function of individual event characteristics, and this penalty neither properly accounts for increased awakening due to aircraft noise nor relates to the effect that noise exposure may have on sleep structure. A sleep-disturbance model is described, along with how it was modified to predict oscillations between lighter stages of sleep and to account for aircraft noise exposure. The model can be used to predict sleep patterns, and thus the amount of time spent in deep sleep, REM sleep, or wakefulness. An illustration of how it can be used to predict the impact on sleep of aircraft movements around an airport at night is given. The seminar will conclude with some comments on potential future research in this area.
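DNL itself has a simple closed form: an energy average of hourly levels with a 10 dB penalty added to the night hours (22:00-07:00). A minimal sketch of that standard formula, with the hourly input levels invented for illustration:

```python
import numpy as np

def dnl(hourly_leq):
    """Day-Night Average Sound Level from 24 hourly Leq values (dBA),
    index 0 = midnight.  Night hours (22:00-07:00) get a 10 dB penalty."""
    hourly_leq = np.asarray(hourly_leq, dtype=float)
    penalty = np.zeros(24)
    penalty[22:] = 10.0      # 22:00-24:00
    penalty[:7] = 10.0       # 00:00-07:00
    return 10 * np.log10(np.mean(10 ** ((hourly_leq + penalty) / 10)))

# A constant 60 dBA around the clock yields DNL ≈ 66.4 dB: the night
# penalty raises the 24-h average even though the level never changes.
print(round(dnl([60.0] * 24), 1))  # → 66.4
```

The example makes the abstract's criticism concrete: two very different exposure patterns (a few loud events versus steady noise) can produce the same energy average and hence the same DNL.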

 

Bio: Patricia Davies received her B.Sc. in Mathematics from the University of Bristol, and her M.Sc. and Ph.D. in Sound and Vibration from the University of Southampton, both in the UK. She is currently a Professor of Mechanical Engineering at Purdue and teaches courses in measurements, controls, signal processing and mechanics.  Dr. Davies is the Director of the Ray W. Herrick Laboratories, where she also conducts research in the areas of sound perception, signal processing, and nonlinear system identification.  Her research has been funded by government and by industry.  She has applied her research to modeling of human response to machinery and transportation noise; modeling the dynamics of viscoelastic materials and seat-occupant systems; visualization of automobile noise sources during pass-by tests; predicting machinery failure; and development of automatic analysis tools for classification of infant and mother laughter. She co-founded a Perception-based Engineering Center at Purdue, a collaborative research group of engineering and psychology professors.  Dr. Davies is a Fellow of the Institute of Noise Control Engineering and served as its President from 2008 to 2010. 

Contact Information: Ray W. Herrick Laboratories, School of Mechanical Engineering, Purdue University, 177 S. Russell Street, West Lafayette, IN 47907-2099, USA. E-mail: daviesp@ecn.purdue.edu

 

 

January 21, 2016

Mike Heinz, PhD (SLHS/BME)

Extending the envelope power spectrum model for speech intelligibility to neural spike-train responses

Recent psychophysically based modelling has demonstrated that the signal-to-noise ratio (SNRENV) at the output of a modulation filter bank provides a robust measure of speech intelligibility (Jørgensen and Dau, 2011). The effect of the noise (N) on speech (S) coding is assumed to: 1) reduce the envelope power of S+N by filling in the dips of clean speech, and 2) introduce a noise floor due to intrinsic fluctuations in the noise itself. SNRENV predicted speech intelligibility across a wider range of degraded conditions than many long-standing speech-intelligibility models (e.g., STI). While the promise of the SNRENV metric has been demonstrated for normal-hearing listeners, it has yet to be thoroughly extended to hearing-impaired listeners because of limitations in our physiological knowledge of how sensorineural hearing loss (SNHL) affects the envelope coding of speech in noise relative to noise alone. Here, envelope coding of non-periodic stimuli (e.g., speech in noise) was quantified from model neural spike trains using shuffled correlograms, which were analyzed in the modulation frequency domain to compute modulation-band-based estimates of signal and noise envelope coding (e.g., a neural SNRENV metric). Preliminary spike-train analyses show strong similarities to the speech envelope power spectrum model of Jørgensen and Dau (2011). While these preliminary neural predictions are shown here primarily to demonstrate the feasibility of neural computations of SNRENV from spike-train responses, they suggest that individual differences may occur based on the differential degree of outer- and inner-hair-cell (OHC/IHC) dysfunction of listeners currently diagnosed into the single category of SNHL. These neural computations will be applied in future animal studies to quantify the effects of various types of SNHL on the coding of speech and inherent noise modulations, which may provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
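The single-band SNRenv idea (envelope power of speech-plus-noise compared against that of the noise alone) can be sketched as follows. The signals and band parameters are illustrative stand-ins, with an AM tone standing in for speech; this is not the mr-sEPSM implementation.

```python
import numpy as np

def envelope(x):
    """Hilbert envelope via the analytic signal (FFT method, even-length x)."""
    n = len(x)
    h = np.zeros(n)
    h[0] = h[n // 2] = 1.0
    h[1:n // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

def snr_env_db(mix, noise, fs, fm=4.0, bw=2.0):
    """Single-band SNRenv (dB): normalized envelope power of speech+noise
    in a modulation band around fm, relative to that of the noise alone."""
    def band_power(x):
        env = envelope(x)
        ac = env - env.mean()
        # Normalized envelope power spectrum (AC power / DC power).
        spec = 2 * np.abs(np.fft.rfft(ac)) ** 2 / (len(x) ** 2 * env.mean() ** 2)
        f = np.fft.rfftfreq(len(x), 1.0 / fs)
        return spec[(f >= fm - bw) & (f <= fm + bw)].sum()
    p_mix, p_n = band_power(mix), band_power(noise)
    p_s = max(p_mix - p_n, 1e-12 * p_n)   # subtract the noise-floor term
    return 10 * np.log10(p_s / p_n)

# "Speech" stood in for by a 4-Hz amplitude-modulated 1-kHz tone in noise.
fs = 16000
t = np.arange(fs) / fs                    # 1 s, even length
rng = np.random.default_rng(0)
noise = 0.2 * rng.standard_normal(t.size)
speech = (1 + 0.8 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
print(snr_env_db(speech + noise, noise, fs) > 0)  # modulation is detectable
```

The subtraction of the noise-alone envelope power is the key move: it implements the assumption that intrinsic noise fluctuations contribute a floor that must be discounted before crediting the speech modulations.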

 

 

January 28, 2016

Katie Scott (BIO, Fekete lab)

The Role of Wnt9a in the Radial Patterning of the Chicken Basilar Papilla

The basilar papilla (BP) is the auditory organ of the chicken and is the equivalent of the organ of Corti in the mammalian cochlea. Sensory hair cells in the BP are classified into two types based on morphology: tall and short. Tall hair cells are located on the neural side of the BP and are innervated primarily by afferent neurons (the neural-side identity). Short hair cells are located on the abneural side of the BP and are primarily innervated by efferent neurons (the abneural-side identity). Early in embryonic development, asymmetry in the expression of Wnt9a mRNA suggests that there is a Wnt9a protein gradient across the sensory organ that is highest on the neural side. Overexpression of Wnt9a results in the development of the neural-side identity across the entire width of the BP. We hypothesize that high levels of Wnt9a are instructive for the neural-side identity and that axon guidance factors act downstream of Wnt9a to impact afferent innervation. To test the hypothesis that Wnt9a is instructive for the neural-side identity, we have designed experiments to knock down Wnt9a transcripts in developing BPs. We have compared retroviral and transposase-mediated gene transfer methods for their ability to deliver short interfering RNAs (siRNAs) against Wnt9a. We have found retroviral transduction to be unreliable for delivering siRNAs, although we are still testing transposase-mediated transduction. To test the hypothesis that axon guidance factors act downstream of Wnt9a, we have run RNA sequencing and real-time quantitative polymerase chain reaction (RT-qPCR) experiments on control and Wnt9a-overexpressing BPs. We found that ephrin-A5, semaphorin-3D, and semaphorin-3F, three known axon guidance molecules, are downstream of Wnt9a. These guidance factors have been shown to have a repulsive effect on neurons and may be present in certain populations of sensory cells to prevent inappropriate innervation.

 

February 4, 2016

Brandon Coventry (BME, Bartlett lab)

From Swarms to Senses: Elucidating inferior colliculus synaptic mechanisms using computational modeling and swarm intelligence

The inferior colliculus (IC) is a major integrative center of the auditory system, receiving excitatory projections from the cochlear nucleus and superior olivary complex and inhibitory inputs from the lateral lemniscus and superior paraolivary complex. Complex neural responses arise from a fine balance of these excitatory and inhibitory inputs, and central auditory processing pathologies such as age-related hearing loss are correlated with disruption of that balance. A major obstacle in understanding auditory processing deficits is estimating the excitatory and inhibitory synaptic inputs that give rise to integrated IC responses. To address this problem, we utilize biophysically accurate conductance-based IC computational models and a new particle swarm optimization variant to recreate in vivo frequency and sinusoidal amplitude modulation (SAM) tuning curves. We show that this method can be used to estimate synaptic input/output functions and to generate experimentally testable hypotheses about them.
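Particle swarm optimization searches a parameter space with a population of candidate solutions, each pulled toward its own best position and the swarm's global best. The following is a minimal, generic PSO sketch, not the modified variant described in the talk, and the toy "tuning curve" fit in the usage example (two synaptic weights recovered from a target response) is purely illustrative.

```python
import random

def pso(loss, n_params, n_particles=20, iters=150, lo=0.0, hi=1.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `loss` over [lo, hi]^n_params with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(n_params)]
           for _ in range(n_particles)]
    vel = [[0.0] * n_params for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    pbest_f = [loss(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # swarm's global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n_params):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

In the estimation problem above, `loss` would compare a conductance-based IC model's simulated tuning curve against the recorded in vivo curve, with the particle dimensions corresponding to excitatory and inhibitory synaptic parameters.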

 

February 18, 2016

Varsha Rallapalli (SLHS, Alexander lab)

Neural modeling to relate individual differences in physiological and perceptual responses with sensorineural hearing loss

Recent psychophysically based modeling has demonstrated that the signal-to-noise ratio in the envelope domain (SNRENV) at the output of a modulation filter bank provides a robust measure of speech intelligibility (Jørgensen & Dau, 2011). The noise (N) is assumed to affect the coding of speech (S) in two ways: 1) it reduces the envelope power of S+N by filling in the dips of clean speech, and 2) it introduces a noise floor due to intrinsic fluctuations in the noise itself. SNRENV predicts speech intelligibility across a wider range of degraded conditions than many long-standing speech-intelligibility models (e.g., STI). While the promise of the SNRENV metric has been demonstrated for normal-hearing listeners, it has yet to be thoroughly extended to hearing-impaired listeners because of limitations in our physiological knowledge of how sensorineural hearing loss (SNHL) affects the envelope coding of speech in noise relative to noise alone. Here, envelope coding of speech in noise was quantified from model neural spike trains using shuffled correlograms, which were analyzed in the modulation frequency domain to compute modulation-band estimates of signal and noise envelope coding (i.e., a neural SNRENV metric). Preliminary spike-train analyses show strong similarities to the speech envelope power spectrum model of Jørgensen and Dau (2011). These preliminary neural predictions primarily demonstrate the feasibility of computing SNRENV from spike-train responses, but they also suggest that individual differences may arise from the differential degrees of outer- and inner-hair-cell (OHC/IHC) dysfunction among listeners currently diagnosed with the single category of SNHL.
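The core SNRENV computation can be sketched as follows: envelope power is measured in a set of modulation bands for both the noisy-speech envelope and the noise-alone envelope, per-band envelope SNRs are formed from their difference, and the bands are combined. This pure-Python illustration uses single modulation-frequency components in place of a full modulation filter bank, so it is a simplification of the Jørgensen and Dau (2011) model, not an implementation of it; the band center frequencies are illustrative.

```python
import math

def env_power(env, fs, fm):
    """Envelope power at modulation frequency fm (Hz), normalized by
    the squared mean (DC) of the envelope. For an envelope
    1 + m*sin(2*pi*fm*t) this returns m**2 / 2."""
    n = len(env)
    dc = sum(env) / n
    re = sum(e * math.cos(2 * math.pi * fm * i / fs) for i, e in enumerate(env))
    im = sum(e * math.sin(2 * math.pi * fm * i / fs) for i, e in enumerate(env))
    return ((re / n) ** 2 + (im / n) ** 2) * 2 / (dc ** 2)

def snr_env(env_sn, env_n, fs, mod_bands=(1, 2, 4, 8, 16, 32)):
    """Per-band envelope SNR = (P_SN - P_N) / P_N, floored at a small
    positive value, combined across bands as sqrt(sum of squares)."""
    total = 0.0
    for fm in mod_bands:
        p_sn = env_power(env_sn, fs, fm)
        p_n = env_power(env_n, fs, fm)
        band = max(p_sn - p_n, 1e-4) / max(p_n, 1e-4)
        total += band ** 2
    return math.sqrt(total)
```

In the neural version described above, the envelope power terms would instead come from shuffled-correlogram spectra of model spike trains rather than from acoustic envelopes.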

 

 

March 3, 2016

Olaf Strelcyk, PhD (Sonova, Warrenville, Illinois)

Perception of talker facing orientation and its effects on speech perception by normal-hearing and hearing-impaired listeners

Despite a vast body of research on normal-hearing (NH) and hearing-impaired (HI) listeners' speech perception in multitalker situations, the perception and effects of talker facing orientation have received very little attention. Facing orientation here refers to the direction a talker is facing, as seen from a listener's perspective, e.g., whether a talker is directly facing the listener or looking in another direction. Two studies will be presented. The first assessed how well listeners could identify the facing orientation of a single talker in quiet. The second examined the importance of facing orientation for speech perception in situations with multiple talkers. Digit identification was measured for a frontal target talker in the presence of two spatially separated interfering talkers reproduced via loudspeakers. Both NH and HI listeners performed significantly better when the interfering talkers were simulated to be facing away. Facing-orientation cues enabled the NH listeners to sequentially stream the digits. The HI listeners did not stream the digits and showed smaller benefits, irrespective of amplification. The results suggest that facing orientation cannot be neglected in the exploration of speech perception in multitalker situations.

March 10, 2016

Heinz Lab (Suyash Joshi, Hearing Systems Group, Tech. Univ. Denmark)

Modelling auditory nerve responses to electrical stimulation

Cochlear implants (CIs) stimulate the auditory nerve (AN) with trains of symmetric biphasic pulses, amplitude modulated with the envelope of the desired acoustic signal. Although this stimulation strategy has been successful in restoring some speech understanding in CI listeners, listeners still face considerable challenges in understanding speech in noisy backgrounds. CI listeners also experience difficulties in perceiving pitch and the location of sounds, suggesting significant deficits in envelope coding. Understanding the stimulus-response relationship of AN fibers to electrical stimulation thus seems crucial for developing stimulation strategies that enhance the fidelity of envelope coding in CI listeners. For this purpose, a simple computational model was developed using reported AN-fiber responses to electrical stimulation. The model is based on the observation that the extracellular electrical field produced by the CI electrodes can result in multiple sites of spike generation along the AN fiber. The site of spike generation largely determines the delay with which spikes arrive in the cochlear nucleus and therefore may affect synchrony with the stimulus. The model was parameterized using AN-fiber responses to single- or paired-pulse stimulation and was tested with dynamic stimuli such as pulse trains. The model was then applied to test the effect of stimulation pulse rate on amplitude modulation detection thresholds of CI listeners.

 

March 24, 2016

Kristina Milvae (SLHS, Strickland lab)

Dynamic adjustment of gain in the peripheral auditory system: Does frequency matter?

Sensory systems adjust to the environment to maintain sensitivity to change.  In the auditory periphery, a possible mechanism of this ability is the medial olivocochlear reflex (MOCR).  The MOCR is a physiological mechanism that reduces cochlear gain in response to sound.  The strength of human ipsilateral cochlear gain reduction across frequency is not well understood.  Otoacoustic emissions (OAEs) have been used as a noninvasive tool to investigate cochlear gain reduction.  The largest effects have been seen in the mid-frequencies, but this may be a limitation of the measure at high frequencies.  It is also unclear how measures of OAE suppression relate to perception.  Psychoacoustics is an alternative approach to measure cochlear gain reduction.  Gain reduction has been estimated at 4 kHz using a forward masking paradigm.  This technique has not yet been used to explore cochlear gain reduction at frequencies below 4 kHz, the focus of this research project.  Young adults with normal hearing participated in this experiment.  Forward masking techniques were used to examine cochlear gain reduction at 1, 2, and 4 kHz.  Comparison of results across frequency and psychoacoustic methods will be discussed.

 

March 31, 2016

Fernando Llanos (Kluender lab)

Relative importance of experience with 1st-order versus 2nd-order statistics in perceptual and statistical learning of vowel categories

Native language experience profoundly shapes listeners’ perception of speech. It is commonly believed that 1st-order statistics (probability-density distributions) of speech sounds are the principal bases for experience-based perceptual organization. Alternatively, some have argued that 2nd-order statistics (covariance) capturing relationships between acoustic attributes may play a greater role. Here, both hypotheses were tested using rounded mid vowels, based loosely upon Finland Swedish, that were novel to native-English listeners. Stimuli for the 1st-order condition were vowel sounds synthesized to create Gaussian 2-dimensional distributions with centroids corresponding to an adult female talker. Stimuli for the 2nd-order condition included the female centroids, and values of F1 and F2 were systematically decreased or increased in a manner consistent with lengthened or shortened vocal tracts, respectively, perceptually spanning from adult male to child. This variation in F1 and F2 across vocal tract/talker was highly correlated in log-frequency and psychoacoustically scaled (ERB) space. Listeners completed an AXB task (no feedback) for pairs of stimuli that included the centroids and traversed the two vowel distributions, prior to and following an 8-minute passive exposure to two distributions of vowel sounds from either of the two conditions. Results suggest that, relative to 1st-order, experience with 2nd-order statistics encourages changes in discrimination emblematic of categorical perception. This interpretation of participants’ perceptual behavior was supported by a series of simulations using a large corpus of naturally produced English vowels. Unsupervised clustering algorithms were used to evaluate three models of statistical learning of minimal contrasts between English vowel pairs. The first two models employed only 1st-order statistics, with assumptions of uniform [M1] or Gaussian [M2] distributions of vowels in an F1-F2 space. The third model [M3] employed 2nd-order statistics by encoding covariance between F1 and F2. The 1st-order Gaussian model [M2] performed better than the uniform model [M1] for six of seven minimal pairs. The 2nd-order model [M3] was significantly superior to both 1st-order models for every pair. Implications of these results for optimal perceptual and statistical learning of phonetic categories will be discussed.
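The difference between the 1st-order Gaussian and 2nd-order models comes down to whether the F1-F2 covariance term is retained in a category's Gaussian density. A small sketch of that contrast (the 2-D data in the example are hypothetical; this is not the corpus or the clustering algorithms used in the study):

```python
import math

def fit_gaussian(points):
    """Mean and 2x2 covariance of 2-D points (e.g., F1, F2 values)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return (mx, my), ((sxx, sxy), (sxy, syy))

def log_density(p, mean, cov, use_cov=True):
    """Log 2-D Gaussian density (up to a constant). With use_cov=False
    the covariance is replaced by its diagonal, i.e. a 1st-order model
    that ignores the F1-F2 correlation."""
    (mx, my), ((a, b), (_, d)) = mean, cov
    if not use_cov:
        b = 0.0
    det = a * d - b * b
    dx, dy = p[0] - mx, p[1] - my
    # Mahalanobis distance via the explicit 2x2 inverse
    m2 = (d * dx * dx - 2 * b * dx * dy + a * dy * dy) / det
    return -0.5 * (m2 + math.log(det))
```

For an elongated, correlated cluster, the full-covariance density assigns much higher likelihood to points lying along the correlation axis than to equally distant points off-axis, whereas the diagonal model cannot tell the two apart; this is the extra information the 2nd-order models exploit.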

 

April 7, 2016

Alli Norris (Au.D. student, SLHS)

Integrative Audiology Grand Rounds: Family Genetics in Audiology

This case presentation takes a deeper look at a patient who presented with an extensive genetic family history relevant to audiology. Topics to be discussed include the important role of the audiologist in this and similar cases, basic diagnostic data and outcomes, and the future plan of care for the patient. Open discussion is welcomed and encouraged for those who attend.

 

April 14, 2016

Brandon Coventry (BME, Bartlett lab)

Light gated neurons: Infrared neural stimulation as a tool for studies in basic neuroscience and clinical neuroprostheses

Electrical stimulation is a valuable tool for basic studies in neuroscience and for clinical treatment of neuropathic disorders, as in cochlear implants for hearing loss. While the cochlear implant is the most successful neuroprosthesis, nonspecific activation of the cochlea is in part responsible for sub-optimal performance in the patient population. Infrared neural stimulation (INS) is a new stimulation modality that utilizes coherent infrared light to stimulate nerves and neurons. INS has been shown to be more spatially specific than electrical stimulation, does not produce electrical artifacts in recording paradigms, and is more directly translatable to the treatment of human neurological disorders because it does not require genetic modification of the target. However, the underlying mechanism of infrared-induced excitability and the laser parameters required for stimulation are not well understood. To better understand these properties, rat sciatic nerves were excited across a continuum of near-infrared irradiation at varying power levels. Furthermore, finite element computational models were used to explore electric field generation across the nerve in response to laser excitation. These studies show that INS occurs across a multiplicity of wavelengths, which can be exploited for fine tuning of spatial activation at power levels well below tissue ablation thresholds. This talk will also discuss applications of INS to the study of auditory neural circuits.

 

April 21, 2016

Alex Francis, PhD (Francis lab)

Listening effort in constant and interrupted noise

Alexander L. Francis, Jennifer Schumaker, and Rongrong Zhang

Listening and speaking in noise is effortful, and even low levels of noise can be a significant source of psychological and physiological stress. Typical laboratory tests of speech perception in noise often present masked stimuli in trials separated by silence, and anecdotal reports suggest that many audiologists also prefer to interrupt clinical tests of the perception of speech in noise to allow clients to speak without noise present. Such practices are, in effect, starting and stopping an otherwise constant background noise multiple times in a short period of time in a manner that is not typically encountered outside the clinic or lab. While these interruptions might reduce listener stress by providing momentary respite from an aversive stimulus, such intermittent noise exposure might also increase stress by repeatedly inducing automatic physiological (orienting) responses to the noise onsets. In this talk we will present some preliminary results of an experiment contrasting these possibilities. Younger (age 18-36) and older (age 60+) listeners heard and repeated sentences presented in speech-shaped noise at 0 dB SNR while physiological responses linked to stress and arousal (skin conductance, heart rate, fingertip pulse amplitude, facial electromyography) were recorded. Two roughly 15-minute blocks of noise, each containing 36 unique sentences, were presented to each listener in counter-balanced order. In the interrupted noise condition the noise was silenced for 5 s shortly after each sentence while participants responded, while in the uninterrupted condition the noise continued unabated. Behavioral measures of listening task performance (proportion of key words repeated correctly) indicate a significant difference in performance between the two conditions as well as a significant difference due to age, but no interaction. We will also discuss preliminary analyses of some of the physiological data if time permits.
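Presenting sentences in speech-shaped noise "at 0 dB SNR" means the noise is scaled so that its RMS level matches that of the speech. A minimal sketch of the standard RMS-based scaling (this is a generic illustration, not the authors' stimulus-generation code):

```python
import math

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def noise_gain(speech, noise, snr_db):
    """Scale factor for the noise so that the speech-to-noise RMS ratio
    equals the target SNR in dB (20*log10 of the amplitude ratio)."""
    return rms(speech) / (rms(noise) * 10 ** (snr_db / 20))

def mix_at_snr(speech, noise, snr_db):
    """Sample-wise mixture of speech and scaled noise at the target SNR."""
    g = noise_gain(speech, noise, snr_db)
    return [s + g * n for s, n in zip(speech, noise)]
```

At 0 dB SNR the gain simply equalizes the two RMS levels; each 20 dB increase in target SNR reduces the noise amplitude by a factor of 10.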

 

April 28, 2016

Chandan Suresh (PhD student, SLHS, Krishnan Lab)

Language-experience plasticity in neural representation of changes in pitch salience


Neural representation of pitch-relevant information at the brainstem and cortical levels of processing is influenced by language experience. A well-known attribute of pitch is its salience. Brainstem frequency-following responses and cortical pitch-specific responses, recorded concurrently, were elicited by a pitch salience continuum spanning weak to strong pitch of a dynamic, iterated rippled noise pitch contour, a homolog of a Mandarin tone. Our aims were to assess how language experience (Chinese, English) affects i) enhancement of neural activity associated with pitch salience at brainstem and cortical levels, ii) the presence of asymmetry in cortical pitch representation, and iii) patterns of relative changes in magnitude along the pitch salience continuum. Peak latency (Fz: Na, Pb, and Nb) was shorter in the Chinese than in the English group across the continuum. Peak-to-peak amplitude (Fz: Na–Pb, Pb–Nb) of the Chinese group grew larger with increasing pitch salience, but an experience-dependent advantage was limited to the Na–Pb component. At temporal sites (T7/T8), the larger amplitude of the Chinese group across the continuum was limited to the Na–Pb component and to the right temporal site. At the brainstem level, F0 magnitude likewise grew larger with increasing pitch salience and also showed an advantage for the Chinese group. A direct comparison of cortical and brainstem responses for the Chinese group reveals different patterns of relative changes in magnitude along the pitch salience continuum. Such differences may point to a transformation in pitch processing at the cortical level, presumably mediated by local sensory and/or extrasensory influences overlaid on the brainstem output.
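Iterated rippled noise, the stimulus class used above, is generated by repeatedly delaying a noise and adding it back onto itself; more iterations yield a stronger pitch at 1/delay, which is how a weak-to-strong pitch salience continuum can be constructed. A minimal sketch of the delay-and-add network (the delay, gain, and iteration counts here are illustrative, not the study's stimulus parameters):

```python
import random

def iterated_rippled_noise(n, delay, gain, iterations, seed=0):
    """Delay-and-add network: each pass adds a copy of the current
    signal delayed by `delay` samples, scaled by `gain`. The pitch at
    sample_rate/delay strengthens with more iterations."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for _ in range(iterations):
        # The comprehension reads from the previous iteration's signal.
        x = [x[i] + (gain * x[i - delay] if i >= delay else 0.0)
             for i in range(n)]
    return x

def autocorr(x, lag):
    """Normalized autocorrelation at `lag`: a simple proxy for the
    pitch salience of the generated noise."""
    return sum(x[i] * x[i - lag] for i in range(lag, len(x))) / sum(v * v for v in x)
```

With gain = 1, the normalized autocorrelation at the delay rises from roughly 0.5 after one iteration toward 1 with many iterations, paralleling the weak-to-strong salience continuum.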