
Seminars in Hearing Research at Purdue

 

PAST: Abstracts

Talks in 2012-2013

[LILY G-401: Fall; BRNG 1255: Spring; 10:30-11:20 AM]

 


 

August 30, 2012

Kenneth Henry, PhD (Heinz Lab)

 

Diminished temporal coding with sensorineural hearing loss emerges in background noise

 

Behavioral studies in humans suggest that sensorineural hearing loss (SNHL) decreases sensitivity to the temporal structure of sound, but neurophysiological studies in mammals provide little evidence for diminished temporal coding. Here, we found that SNHL in chinchillas degrades peripheral temporal coding in background noise substantially more than in quiet. These results resolve discrepancies between previous studies and help explain why perceptual difficulties in hearing-impaired listeners often emerge in noisy situations.

 

 

September 6, 2012

James Felli, Peter Houghton, Ian Watson (Eli Lilly); David A. Colby (Purdue, Medicinal Chemistry)

The Music and Chemistry Project

Since 2011, Eli Lilly and Company has been experimenting with using musical representations of molecular structures to enhance its approach to screening compounds in a chemical space.

Currently, the first step in the methodology involves computing the nuclear magnetic resonance (NMR) spectrum for a molecule.  The NMR spectrum essentially classifies hydrogen atoms according to the type of environment in which they occur (e.g., a hydrogen atom in an electron-withdrawing environment will be shifted differently from a hydrogen atom attached to an electron-donating atom); in effect, this is a fine-grained atom-typing mechanism.  Many computational techniques are driven by their atom-typing underpinnings, so classifying atoms according to NMR type is a novel and entirely valid way of looking at molecules.  The second step is the transformation of the NMR spectrum into “Music Space.”  This is accomplished via a monotonic transformation, with adjustments to ensure that the diversity of atomic environments maps reasonably onto the customary ranges of music.  At this stage, we believe that we stand on firm ground: we have a nice mapping from a detailed representation of chemical structure into a very different space.
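
To make the second step concrete, here is a minimal sketch of one possible monotonic shift-to-pitch mapping, assuming a linear map from the customary 1H chemical-shift range onto MIDI note numbers; the ranges, the linear form, and the function name are illustrative stand-ins, not the actual Lilly transformation.

```python
import numpy as np

# Hypothetical illustration of the NMR -> "Music Space" step: a monotonic
# map from 1H chemical shifts (ppm) to MIDI note numbers. The shift and
# pitch ranges below are assumptions, not the values used at Lilly.
SHIFT_MIN, SHIFT_MAX = 0.0, 12.0   # typical 1H chemical-shift range, ppm
NOTE_MIN, NOTE_MAX = 36, 96        # MIDI C2..C7, a customary musical range

def shifts_to_notes(shifts_ppm):
    """Monotonically map chemical shifts to MIDI notes, clipped to range."""
    s = np.clip(np.asarray(shifts_ppm, dtype=float), SHIFT_MIN, SHIFT_MAX)
    frac = (s - SHIFT_MIN) / (SHIFT_MAX - SHIFT_MIN)
    return np.round(NOTE_MIN + frac * (NOTE_MAX - NOTE_MIN)).astype(int)

# Hypothetical molecule: aromatic, vinylic, and aliphatic hydrogens
print(shifts_to_notes([7.3, 5.9, 2.1, 0.9]))  # one note per H environment
```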

The leap of faith – and the essential assumption underlying the approach – is that transformations made to the music in “Music Space” can be back-translated in a meaningful way to a chemical structure.  There are three fundamental and unanswered questions here:

1) Can transformations that enhance "internal consistency" in “Music Space,” as expressed by musicians, serve as creative prompts when translated back into the space of molecular structure, thereby facilitating the kind of lateral thinking required to open new vistas of thinking regarding chemistry?

2) Can transformations that enhance the "internal consistency" in “Music Space,” as expressed by musicians, provide insights when translated back into the space of molecular structure that improve our understanding of molecular interactions and behaviors?

3) Can transformations that enhance the "internal consistency" in “Music Space,” as expressed by musicians, be extended to usefully characterize interaction fingerprints between ligands and proteins?

At this point, we have no real guidance available as to whether or not this approach of mapping chemical structure into “Music Space” and engaging musicians as creative catalysts is a good and/or workable idea.  There is no reason to assume that the effort will bear fruit; there is no reason to imagine that it will not.  One can make compelling arguments both for and against the concept.  Realistically, the likelihood of success seems less than a coin toss…

…but if it works, it would be an exhilarating occasion!

 

September 13, 2012

Ed Bartlett, PhD

 

A computational model of inferior colliculus responses to amplitude modulated sounds in young and aged rats

 

Cal F. Rabang, Aravindakshan Parthasarathy, Yamini Venkataraman, Zachery L. Fisher, Stephanie M. Gardner and Edward L. Bartlett

 

The inferior colliculus (IC) receives ascending excitatory and inhibitory inputs from multiple sources, but how these auditory inputs converge to generate IC spike patterns is poorly understood. Simulating patterns of in vivo spike train data from cellular and synaptic models creates a powerful framework for identifying factors that contribute to changes in IC responses, such as those resulting in age-related loss of temporal processing. A conductance-based single-neuron IC model was constructed, and its responses were compared to those observed during in vivo IC recordings in rats. IC spike patterns were evoked using amplitude-modulated (AM) tone or noise carriers at 20-40 dB above threshold and were classified as low-pass, band-pass, band-reject, all-pass, or complex based on the tuning shape of their rate modulation transfer function (rMTF). Their temporal modulation transfer functions (tMTFs) were also measured. These spike patterns provided experimental measures of rate, vector strength, and firing pattern for comparison with model outputs. Patterns of excitatory and inhibitory synaptic convergence onto IC neurons were based on anatomical studies and generalized input tuning for modulation frequency. Responses of modeled ascending inputs were derived from experimental data from previous studies. Adapting and sustained IC intrinsic models were created, with adaptation produced by calcium-activated potassium currents. Short-term synaptic plasticity was incorporated into the model in the form of synaptic depression, which was shown to have a substantial effect on the magnitude and time course of the IC response. The most commonly observed IC response subtypes were recreated, enabling dissociation of inherited response properties from those generated in the IC. Furthermore, the model was used to predict how the reduction in GABAergic inhibition seen anatomically with age contributes to age-related loss of temporal processing.
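
Rate and vector strength, the two response measures named above, have standard definitions; the sketch below (not the authors' code, which used conductance-based models) computes both from a list of spike times at one modulation frequency. Sweeping the modulation frequency and plotting rate or vector strength against it yields the rMTF and tMTF, respectively.

```python
import numpy as np

def rate_and_vector_strength(spike_times, fm, dur):
    """Firing rate (spikes/s) and vector strength of a spike train
    relative to modulation frequency fm (Hz); spike_times in seconds."""
    t = np.asarray(spike_times, dtype=float)
    rate = t.size / dur
    if t.size == 0:
        return rate, 0.0
    phases = 2 * np.pi * fm * t                # spike phases re: modulation cycle
    vs = np.abs(np.mean(np.exp(1j * phases)))  # |mean resultant vector|, 0..1
    return rate, vs
```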

 

September 20, 2012

Saradha Ananthakrishnan, AuD (PhD student, SLHS, Krishnan lab)

 

Subcortical Pitch Processing & Speech Encoding in the Normal and Impaired Auditory Systems

 

Speech perception deficits in hearing-impaired (HI) listeners have been shown to be associated with a reduced ability to use temporal fine structure (TFS) information compared to normal-hearing listeners. A possible reason for this reduced sensitivity may be decreased phase-locking ability, although experiments on animals with hearing loss have yielded mixed results. Here we examine whether the neural representation of the envelope and/or the TFS, as reflected in the scalp-recorded brainstem frequency following response (FFR), is degraded in HI listeners compared to normal-hearing listeners.  FFRs were recorded from normal-hearing listeners and HI listeners with mild to moderate sensorineural hearing loss using a synthetic vowel /u/, at equal levels in dB SPL and dB SL. Speech-in-noise testing provided a behavioral complement to the FFR data. Pitch-related neural periodicity was computed from the FFRs by examining response autocorrelation functions (ACFs). In addition, spectral analysis was performed by computing FFTs of the FFRs to identify the dominant frequency in the responses. FFRs to these stimuli in the normal-hearing listeners suggest robust neural phase locking to both envelope and TFS. In contrast, neural phase locking to envelope and TFS was severely degraded in the HI listeners. Other questions being addressed include the relationship (if any) between the brainstem data and age, audiometric hearing thresholds, and/or results from speech-in-noise testing.
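
A minimal sketch of the two analyses described (response ACF for pitch-related periodicity, FFT for the dominant frequency), assuming a single averaged FFR waveform; the 50-500 Hz pitch search range is an illustrative choice, not the study's parameter.

```python
import numpy as np

def ffr_pitch_metrics(ffr, fs, f0_lo=50.0, f0_hi=500.0):
    """Periodicity (via ACF) and dominant frequency (via FFT) of an FFR."""
    x = np.asarray(ffr, dtype=float)
    x = x - x.mean()
    # Autocorrelation function: the peak lag within a plausible pitch
    # range gives the pitch period.
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf = acf / acf[0]                          # normalize so ACF(0) = 1
    lo, hi = int(fs / f0_hi), int(fs / f0_lo)   # lag window for 50-500 Hz
    lag = lo + np.argmax(acf[lo:hi])
    f0_acf = fs / lag
    # FFT: frequency of the largest spectral peak (excluding DC).
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    f_dom = freqs[1:][np.argmax(spec[1:])]
    return f0_acf, f_dom
```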

 

 

September 27, 2012

Jeff Lucas, PhD (BIO)

 

Evoked potentials and avian auditory processing

 

Jeff Lucas, Ken Henry & Megan Gall

 

We have been using Ravi Krishnan's equipment to record auditory processing of species-relevant sounds in a variety of bird species.  Our focus is an analysis of variability in auditory processing at several levels: species, sex, individual, and within-individual.  I will present several examples of results we have collected over the last few years that span these levels of analysis.

 

 

October 4, 2012

POSTPONED - illness

 

 

October 11, 2012

Ching-Chih Wu (PhD student, ECE, Luo lab)

 

Current steering and electrode spanning with partial tripolar stimulation mode in cochlear implants

 

Cochlear implants (CIs) partially restore hearing sensation to profoundly deaf people by electrically stimulating the surviving auditory neurons. However, current CI performance is limited in challenging listening tasks such as speech recognition in noise and music perception. A possible reason is the lack of fine spectral detail with CIs, due to the small number of implanted electrodes and the large current spread of electric stimulation. Here, we propose to introduce current steering and electrode spanning to the more focused partial tripolar (pTP) stimulation mode. These novel stimulation modes may provide additional distinctive frequency channels for better coding of spectral fine structure, and may help address practical issues of pTP-mode speech processing strategies caused by cochlear dead regions and defective electrode contacts. Loudness and pitch perception with steered and spanned pTP modes were simulated using a computational model of CI stimulation and were tested in six adult CI users on three electrodes. The human psychophysical data verified the feasibility and efficacy of the proposed stimulation modes in eliciting salient pitch changes for CI users, and somewhat agreed with the modeling results based on the center of gravity of the neural excitation pattern. However, there was large inter-subject variability in the psychophysical data, which calls for future studies of the effects of neural survival and electrode-neuron distance using the computational model, as well as measures of the actual neural excitation patterns in individual subjects using the ECAP or EFI technique.
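
For illustration, here is a sketch of how electrode currents might be assigned in a steered pTP configuration. The parametrization (sigma as the fraction of return current carried by the intracochlear flanking electrodes, alpha as the share steered to the apical flank) is an assumption chosen for clarity; the study's exact definitions may differ.

```python
# Hypothetical sketch of steered partial-tripolar (pTP) electrode currents.
def ptp_currents(I, sigma, alpha):
    """Return (apical_flank, main, basal_flank, extracochlear) currents.
    Main electrode carries +I; the flanks return opposite-polarity current;
    the remaining (1 - sigma) * I returns through the extracochlear ground."""
    assert 0.0 <= sigma <= 1.0 and 0.0 <= alpha <= 1.0
    apical = -alpha * sigma * I
    basal = -(1.0 - alpha) * sigma * I
    ground = -(1.0 - sigma) * I
    return apical, I, basal, ground

# alpha = 0.5 gives the symmetric pTP mode; moving alpha toward 0 or 1
# "steers" the excitation peak between the flanks, which can create
# intermediate pitch percepts between physical electrode positions.
print(ptp_currents(1.0, 0.75, 0.5))  # -> (-0.375, 1.0, -0.375, -0.25)
```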

 

 

October 18, 2012

Elin Roverud, AuD (PhD student, SLHS, Strickland Lab)

 

Possible psychoacoustic measures of the medial olivocochlear reflex

 

Psychoacoustic techniques can be used to measure peripheral auditory properties (e.g., compression and tuning of auditory filters) in human listeners.  Interpretation of psychoacoustic data depends on a set of underlying assumptions.  One common assumption is that cochlear filter properties remain static while they are being measured, regardless of preceding sound stimulation.  However, this assumption has been called into question in light of physiological studies of the medial olivocochlear reflex (MOCR).  The MOCR is a sluggish, sound-evoked reflex that can change the cochlear filter over the course of a sound.  In our lab, we explore the possible influence of the MOCR on psychoacoustic results.  This research has implications for the design and interpretation of psychoacoustic experiments and thus can impact our understanding of the human auditory system.

 

 

October 25, 2012

Andrew S Koivuniemi (MD/PhD student, BME, Otto Lab)

 

Optimized Parameters to Electrically Activate the Auditory Brain

 

Intracortical microstimulation (ICMS) of primary sensory regions of the brain uses brief pulses of electrical energy delivered through microscopic electrodes to artificially activate networks of neurons, creating sensory illusions in the stimulated individual. This ability to generate artificial sensations makes ICMS a compelling platform for the development of sensory prostheses for the blind, paralyzed, and deaf. Unfortunately, fundamental questions regarding the optimal stimulation parameters have not been addressed in a systematic and behaviorally relevant way, leading researchers to select stimuli based on ad hoc assumptions and arbitrary historical standards.

The purpose of this presentation is to summarize a series of behavioral experiments performed in rats trained to detect auditory and electrical stimuli. These experiments were designed to answer the following three questions. 1) What is the best cortical depth for stimulation? 2) What is the optimal stimulation waveform? 3) What is the maximal useful stimulus pulse rate? The present results suggest the following answers: 1) cortical layers V and IV; 2) biphasic, charge-balanced, symmetric, cathode-leading pulses with a duration of ~100 microseconds per phase; and 3) 80 pulses per second.
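
To make the waveform answer concrete, here is a minimal sketch of such a pulse; the amplitude and sampling rate are illustrative assumptions, as neither is specified above.

```python
import numpy as np

def biphasic_pulse(amp_ua=50.0, phase_us=100.0, fs_hz=1_000_000):
    """Charge-balanced, symmetric, biphasic, cathode-leading pulse:
    a negative (cathodic) phase followed by an equal positive phase."""
    n = int(round(phase_us * 1e-6 * fs_hz))  # samples per phase
    cathodic = -amp_ua * np.ones(n)          # cathodic phase delivered first
    anodic = +amp_ua * np.ones(n)            # opposite phase balances the charge
    return np.concatenate([cathodic, anodic])

pulse = biphasic_pulse()
assert abs(pulse.sum()) < 1e-9               # net injected charge is zero
```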

 

November 1, 2012

Josh Alexander, PhD (SLHS)

 

The Importance of Information vs. Audibility for Hearing Aid Processing Strategies

 

Research in my lab, the Experimental Amplification Research (EAR) lab, focuses on testing existing signal processing strategies for hearing aids as well as developing new ones.  Understanding the information in the speech signal, and how it is affected by signal processing strategies and by hearing loss, will help audiologists make more judicious decisions about how to customize features in hearing aids for maximum benefit.  We are now working toward a perceptual model, based on statistics of auditory nerve fiber firing patterns and information theory, to describe some of our findings.  A future talk will describe the model; this talk will describe the findings we wish to explain.

 

Two signal processing strategies will be discussed.  Wide dynamic range compression (WDRC) is a ubiquitous feature in hearing aids that is used to repackage information in the amplitude domain for the explicit purpose of enhancing signal audibility.  Data will be presented that demonstrate a dissociation between audibility and speech recognition across a variety of WDRC settings.  Nonlinear frequency compression (NFC) attempts to increase the speech information in a hearing aid user’s audible bandwidth by functionally decimating the input bandwidth above a certain start frequency.  Increasing the amount of frequency compression increases the amount of input bandwidth moved into the audible frequency range but reduces spectral resolution.  Lowering the start frequency can help keep the amount of frequency compression low, but this distorts a greater input range.  Data that speak to these tradeoffs will be discussed.
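
A minimal sketch of one commonly described form of NFC (log-frequency compression above a start frequency) makes this tradeoff concrete; the start frequency and compression ratio below are illustrative, and commercial implementations differ in detail.

```python
import numpy as np

def nfc_map(f_in, start_hz=2000.0, ratio=2.0):
    """Nonlinear frequency compression: identity below the start frequency,
    log-frequency compression by `ratio` above it."""
    f = np.asarray(f_in, dtype=float)
    compressed = start_hz * (f / start_hz) ** (1.0 / ratio)
    return np.where(f <= start_hz, f, compressed)

# Raising `ratio` squeezes more input bandwidth into the audible output
# range, at the cost of spectral resolution; lowering `start_hz` keeps
# `ratio` small but distorts a wider input range -- the tradeoff above.
print(nfc_map([1000, 4000, 8000]))  # ~[1000., 2828., 4000.]
```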

 

November 8, 2012

Alex Francis, PhD (SLHS)

 

Factors affecting listeners' use of multiple acoustic cues in speech

 

A typical speech signal contains a wide variety of linguistically meaningful acoustic properties that, in combination and individually, signal the presence of specific speech sounds to native listeners. In general, although a given phonological contrast may be distinguishable according to multiple such acoustic correlates, native listeners tend to rely on one or a few of these (treating them as primary cues), making use of the other, secondary cues only to a lesser extent or when the primary cue is rendered uninformative. Failure to make use of primary cues, or inappropriate dependence on secondary ones, may lead to decreased intelligibility and/or increased listening effort.

 

In this talk I will discuss three studies currently in progress that investigate factors that may lead to ineffective cue use in different groups of listeners. In the first case, Mengxi Lin and I are studying native Mandarin learners of English to determine whether prior (native language) experience with lexical tones affects learners' use of fundamental frequency in English, and, if so, whether this has consequences for acquisition of English more generally. In the second, Fernando Llanos and I are studying perception of stop consonant voicing by Spanish learners of English in order to determine why these listeners tend to exhibit an over-reliance on acoustic cues in English that seem to play only a marginal role in both Spanish and English. Finally, if time permits, I will present some very preliminary data from a project with Josh Alexander in which we are looking at how age and hearing impairment affect cue use in native English listeners.  

 

 

November 15, 2012

Cal Rabang (PhD student, BME, Bartlett lab)

 

Modeling cellular mechanisms underlying representations of temporal modulation in the auditory midbrain and thalamus

 

In animals and humans, temporal processing of acoustic features is critical for perception of species-specific sounds and speech. Time-varying sound features are often represented in the early auditory pathway as stimulus-synchronized patterns of neural activity. These representations undergo transformations as they pass from the inferior colliculus (IC) to auditory cortex via the medial geniculate body (MGB).  IC responses preserve their synchronized inputs but show much greater firing rate modulation than many of their inputs. How the inputs to the IC converge to generate tuned rate-coded and temporally coded response patterns is poorly understood. In the MGB, two different response types are observed: stimulus-synchronized responses, which faithfully preserve the temporal coding of their afferent inputs, and non-synchronized responses, which are not phase-locked to the inputs and represent changes in temporal modulation by a rate code. The cellular mechanisms that produce these segregated responses are also poorly understood.

 

Single compartment neuron models of the IC and MGB were created using MATLAB and NEURON software. The most commonly observed IC response subtypes were recreated, enabling dissociation of inherited response properties from those that were generated in IC (Rabang et al., 2012).

 

The MGB model investigated the roles of two differing populations of excitatory inputs, feedforward inhibition, and synaptic plasticity in the generation of either synchronized or non-synchronized responses (Rabang and Bartlett, 2011). Patch-clamp recordings from MGB neurons verified synaptic model parameters. Both models recreated in vivo responses and made predictions about the role of inhibition in normal hearing and age-related hearing loss.
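
As an illustration of the short-term plasticity component, here is a minimal sketch of resource-depletion (Tsodyks-Markram-style) synaptic depression of the general kind such models incorporate; the parameter values are illustrative, not the fitted values from the patch-clamp recordings.

```python
import numpy as np

def depressed_amplitudes(spike_times, U=0.4, tau_rec=0.2, a0=1.0):
    """Relative PSC amplitude for each presynaptic spike under depression.
    U: release fraction; tau_rec: resource recovery time constant (s)."""
    R, last_t, amps = 1.0, None, []
    for t in spike_times:
        if last_t is not None:            # resources recover between spikes
            R = 1.0 - (1.0 - R) * np.exp(-(t - last_t) / tau_rec)
        amps.append(a0 * U * R)           # amplitude scales with resources
        R *= (1.0 - U)                    # each spike depletes a fraction U
        last_t = t
    return amps

# A fast train depresses strongly; a slow one mostly recovers between spikes:
print(depressed_amplitudes(np.arange(5) / 100.0))  # 100 Hz: rapid depression
print(depressed_amplitudes(np.arange(5) / 10.0))   # 10 Hz: near-full recovery
```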

 

 

November 29, 2012

Kenneth Henry, PhD (Heinz Lab)

 

Auditory processing of temporal structure in birds and mammals

 

Temporal acoustic cues are an important feature for discrimination of communication signals in many animals, including most songbird species and humans. Following a short overview of how temporal information is encoded in the cochlea, I will describe a comparative study focused on how temporal processing mechanisms have evolved in different songbird species. Species differences in temporal processing are inversely related to cochlear frequency tuning and appear to improve the representation of communication signals in species-specific habitats. In the second part of the talk, I will describe work focused on the effects of cochlear hearing loss on neural coding of temporal information in mammals. In general, these studies show that deficits in temporal coding with hearing loss emerge under realistic listening conditions, that is, in background noise and in response to broadband signals. The implications of these results for speech perception in humans are discussed.

 

 

December 6, 2012

Vidhya Munnamalai, PhD (Fekete Lab now; this work from Bermingham-McDonogh Lab, University of Washington)

      

A link between Notch and Fgf20 in prosensory formation in mammalian cochlear development

 

During development, fibroblast growth factors (FGFs) are required for inner ear development as well as hair cell formation in the mammalian cochlea, and thus make attractive therapeutic candidates for the regeneration of sensory cells. Previous findings showed that Fgfr1 conditional knockout mice exhibited defects in hair cell and support cell formation. Immunoblocking with an Fgf20 antibody in vitro produced a similar phenotype. While hair cell differentiation in mice starts at embryonic day (E)14.5, beginning with the inner hair cells, Fgf20 expression precedes hair cell differentiation at E13.5 in the cochlea. This suggests a potential role for Fgf20 in priming the sensory epithelium for hair cell formation. Treatment of explants with a gamma-secretase inhibitor, DAPT, decreased Fgf20 mRNA, suggesting that Notch is upstream of Fgf20. Notch signaling also plays an early role in prosensory formation during cochlear development. In this report we show that, during development, Notch-mediated regulation of prosensory formation in the cochlea occurs via Fgf20. Addition of exogenous FGF20 compensated for the block in Notch signaling and rescued Sox2, a prosensory marker, and Gfi1, an early hair cell marker, in explant cultures. We hypothesized that Fgf20 plays a role in specification, amplification, or maintenance of Sox2 expression in prosensory progenitors of the developing mammalian cochlea.

 

 

December 13, 2012

Keith R. Kluender, PhD (Professor and Dept. Head, SLHS)

      

Speech perception as efficient coding

 

Fundamental principles that govern all perception, from transduction to cortex, are shaping our understanding of perception of speech and other familiar sounds. Here, ecological and sensorineural considerations are proposed in support of an information-theoretical approach to speech perception. Optimization of information transmission and efficient coding are emphasized in explanations of classic characteristics of speech perception, including: perceptual resilience to signal degradation; variability across changes in listening environment, rate, and talker; categorical perception; and word segmentation. Experimental findings will be used to illustrate how a series of like processes operate upon the acoustic signal with increasing levels of sophistication on the way from waveforms to words. Common to these processes are ways that perceptual systems absorb predictable characteristics of the soundscape, across time scales from temporally local (adaptation) to extended (learning), thereby enhancing sensitivity to new information.

 

January 10, 2013

Erica Hegland (PhD student, SLHS, Strickland Lab)

 

Suppression and Enhancement Estimated from Growth of Masking Functions

 

The auditory system is especially good at detecting changes in sounds.  This is evident and measurable in a psychoacoustic phenomenon known as enhancement.  In enhancement, a target frequency stands out when it is preceded by a notched harmonic complex.  Psychophysical and physiological studies have examined whether enhancement may be due to a reduction in suppression by the medial olivocochlear reflex.  Because this is a rather complicated question to test, the results have been mixed.  In this talk I will discuss this hypothesis and present results of a psychophysical study in which enhancement and a decrease in suppression are measured from the same data.

 

January 17, 2013

Xin Luo, PhD (SLHS)

 

Melodic contour and interval perception with cochlear implants

 

Music perception remains challenging for cochlear implant (CI) users, due to the poorly encoded pitch cues with CIs. Accurate perception of the direction and size of pitch changes between musical notes (i.e., the melodic contour and interval) is essential for melody recognition. In this talk, I will discuss two recent studies on melodic contour and interval perception with CIs. The first study tested the hypothesis that CI users’ melodic contour identification and familiar melody recognition may be enhanced by consistent pitch and loudness changes between musical notes. The results showed that adding loudness changes in the same direction as pitch changes significantly improved the identification of melodic contours with 1-semitone intervals, but did not lead to better recognition of familiar melodies, which typically have larger intervals and pitch ranges. The second study systematically investigated pitch interval discrimination, interval size estimation, and the rating and adjustment of musical intervals in familiar melodies. The results revealed that for both CI users and normal-hearing listeners, pitch interval and direction had significant effects on interval discrimination thresholds, while pitch range and direction significantly interacted with each other for interval size estimation. The pattern of interval size estimation may be explained by music listening expectancies rather than interval discrimination thresholds. Subjects with better pitch interval discrimination were also more consistent in the rating and adjustment of musical intervals in familiar melodies.
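
For reference, interval size in such studies is conventionally expressed in semitones on a log-frequency scale; a minimal sketch of the conversion:

```python
import numpy as np

def interval_semitones(f1, f2):
    """Signed musical interval between two fundamental frequencies."""
    return 12.0 * np.log2(f2 / f1)

# A 1-semitone step is a frequency ratio of 2**(1/12), i.e., about 6%.
print(interval_semitones(220.0, 220.0 * 2 ** (1 / 12)))  # ~1.0
```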

 

 

January 24, 2013

Kevin Otto, PhD (BIO/BME)

 

On the Relationship of Chronic Microstimulation and Neural-Tissue Interfacial Quality

 

There is a fundamental obstacle that needs to be addressed in the design and use of penetrating microdevices for neuroprostheses: the tissue response to device insertion and the subsequent reliability of the device-tissue interface for longitudinal recording or stimulation.  Our long-term goal is to develop multi-channel interfaces with central nervous tissue for clinical therapy.  In particular, our objective focuses on the effects of the reactive tissue response on the efficacy of interface-driven behavior.  We are pursuing two simultaneous experiments.  First, we are investigating the effect of device-tissue interfacial quality on the psychophysical threshold for sensory cortical microstimulation.  Chronic implantation of neural implants is followed by a reactive tissue response that both functionally isolates the electrode from the tissue and triggers neuronal apoptosis and migration.  We measure these functional changes to determine their correlation with, and potential causal effect on, the efficacy of a cortical auditory prosthesis.  Second, we are studying how different microstimulation parameters may exacerbate the reactive tissue response, potentially affecting the psychophysical threshold and the dynamic range for sensation.  To this end, psychophysical experiments are being performed using multi-channel cortical implants in the auditory cortex of rats.  Here we report the effects of electrode-tissue impedance, cortical depth, days post-implant, and waveform asymmetry on the psychophysical threshold of auditory cortical microstimulation.  We expect that these data will further enable the design and development of neuroprosthetic interfaces for many potential therapeutic applications.

 

 

January 31, 2013

Kelly Ronald (PhD student, BIO, Lucas lab)

 

Differences in frequency sensitivity and temporal resolution in a songbird, Molothrus ater

 

There is substantial evidence for seasonal variation in the vocal signals of many songbird species; interestingly, recent studies have also shown seasonal variation in the auditory processing of acoustic signals.  Male brown-headed cowbirds (Molothrus ater) have been used in multiple studies of mate choice and endocrinology and, as a result, we now know much about the seasonal variation in behavior and hormones in this species.  Nevertheless, we still do not know how the auditory processing of male cowbirds may vary with changes in physiological condition. Here we discuss the behavioral and physiological changes during two critical periods in the male cowbird's annual cycle, breeding and molt, and demonstrate how testosterone and food availability, respectively, affect the processing of auditory stimuli within these periods.  Results will be discussed in the context of sexual selection theory, and the implications of seasonal changes in auditory processing will be reviewed.

 

 

February 7, 2013

Aravind Parthasarathy, PhD (Bartlett lab)

           

Age-related changes in auditory temporal processing assessed using frequency-following responses

 

Our knowledge of age-related changes in auditory processing in the central auditory system is limited, unlike the changes in the peripheral hearing organs, which are more extensively studied. These changes in the central auditory pathway primarily manifest as changes in the processing of temporally complex stimuli. This study aims to understand age-related changes in temporal processing in a rodent model system using non-invasive auditory evoked potentials. Frequency following responses (FFRs) to sinusoidally amplitude-modulated (sAM) tones varying in presentation level, modulation frequency, and modulation depth were recorded from young and aged Fischer-344 rats using subdermal needle electrodes. FFRs were also obtained for sAM tones in the presence of other sAM or noise maskers overlapping in time. The evoked potentials indicate that responses to these stimuli are similar between young and aged animals for slower modulation frequencies and high modulation depths. However, significant deficits begin to emerge with age at lower modulation depths and faster modulation frequencies, even when changes in hearing sensitivity are compensated for. Responses in the presence of maskers also suggest a greater degree of relative masking in the young compared to the aged, which increases with the amount of stimulus overlap. These results indicate that age-related temporal processing deficits become apparent only under degraded listening conditions, and standard diagnostic testing methods in quiet may not be sufficient for fully understanding age-related hearing changes that could affect quality of life. This work also has implications for constraining the responsible cellular and network mechanisms, as well as for rapid testing of therapeutic interventions, including auditory training or pharmacological treatments.
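
The sAM stimulus has a standard definition, sketched below; the specific carrier, modulation frequency, and depth values are illustrative, not the study's settings.

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs=44100):
    """Sinusoidally amplitude-modulated (sAM) tone:
    s(t) = (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), m = modulation depth."""
    t = np.arange(int(dur * fs)) / fs
    return (1.0 + depth * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Hypothetical examples: a shallow, fast modulation (the kind of condition
# where aged deficits emerged) vs. a deep, slow one.
hard = sam_tone(fc=8000, fm=1024, depth=0.25, dur=0.2)
easy = sam_tone(fc=8000, fm=45, depth=1.0, dur=0.2)
```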

 

 

February 14, 2013

Michelle Stoller (PhD student, BIO, Fekete lab)

           

Influencing Inner Ear Cellular Fate with Gene Transfer Tools Containing miRNAs

 

Sensorineural hearing loss is the leading type of hearing impairment in the population.  While this type of loss can result from the destruction of multiple different cell types, the majority of cases are due to damage sustained by the hair cells (HCs) in the inner ear.   The development of these specialized sensory cells relies on the master HC gene Atoh1.  Atoh1 is necessary and sufficient for hair cell formation, but current research has also reinforced the importance of other factors in HC development and differentiation called miRNAs.  One family of miRNAs in particular has been the focus of much research in the hearing field: the miR-183 family, which includes miR-182, -183, and -96.  Mutations in the seed region of miR-96 result in a lack of fully developed HCs in mice and underlie profound hearing loss in both mice and humans. Knockdown of this family of miRNAs in the zebrafish resulted in a loss of HCs.  Together, these findings lead us to hypothesize that Atoh1-mediated transcriptional activation and miRNA-mediated translational repression may function synergistically during hair cell development, maturation, and, possibly, regeneration. To test these ideas in chicken embryos and adult mice, we created several different viral vectors capable of carrying and delivering the combination of Atoh1 and the miR-183 family to the inner ear.  We have also generated vectors carrying another miRNA of interest, miR-9, for delivery to the inner ear of embryonic chickens. Recent results suggest that miRNAs on their own and with Atoh1 can produce ectopic and/or extra HCs.  Future research will focus on using these vectors as therapeutic agents to promote HC regeneration in a mouse model of deafness.

 

 

February 21, 2013

 

ARO recap & discussion

 

February 28, 2013

Ann Hickox, PhD (Heinz Lab)

 

Noise-induced cochlear nerve degeneration: hyperacusis and tinnitus in the absence of hair cell damage?

Perceptual abnormalities such as hyperacusis and tinnitus often occur following acoustic overexposure. Although such exposure can also result in permanent threshold elevation, some individuals with noise-induced hyperacusis or tinnitus show clinically normal thresholds. Recent work in animals has shown that noise exposure can cause permanent degeneration of the cochlear nerve despite complete threshold recovery and lack of hair cell damage (Kujawa and Liberman, 2009; Lin et al., 2011). Here, we ask whether this noise-induced primary neuronal degeneration results in abnormal auditory behavior, based on the acoustic startle response and prepulse inhibition (PPI) of startle. Responses to tones and to broadband noise were measured in mice exposed either to a neuropathic noise causing primary neuronal degeneration or to a lower-intensity, non-neuropathic noise, and in unexposed controls. Mice with cochlear neuronal loss displayed hyper-responsivity to sound, as evidenced by lower startle thresholds and enhanced PPI, while exposed mice without neuronal loss showed control-like responses. Despite significantly reduced cochlear nerve response, seen as reduced wave I of the auditory brainstem response, later peaks were unchanged or enhanced, suggesting neural hyperactivity in the auditory brainstem that could underlie the abnormal behavior on the startle tests. The results suggest a role for cochlear primary neuronal degeneration in central neural excitability and, by extension, in the generation of tinnitus and hyperacusis.

 

March 7, 2013

Evelyn Davies-Venn, PhD (Strickland Lab)

 

The relationship between different measures of spectral resolution and speech recognition: Normal hearing vs. hearing loss

 

It is widely accepted that at high levels basilar membrane responses become more linear and frequency tuning broadens. However, recent data suggest that narrowband measures of auditory filter bandwidth may exaggerate the functional effect of high levels on listeners’ spectral resolution abilities. The functional effects of presentation level on frequency tuning may differ for narrowband versus broadband measures of spectral resolution. This study used a within-subject, repeated-measures design to evaluate the relationship between speech recognition and narrowband versus broadband measures of spectral resolution. Listeners with normal hearing and with hearing loss were tested at different sensation levels. Auditory filter bandwidths (ERBs) were calculated from notched-noise masked threshold data (Stone et al., 1992). Broadband spectral resolution was measured using a spectral modulation detection task (Litvak et al., 2007) and a phase-reversal spectral ripple density discrimination task (Won et al., 2007).
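
As an illustration of the broadband measures, here is a minimal sketch of a spectrally rippled noise of the general kind used in spectral modulation detection and ripple discrimination tasks: a sinusoidal spectral envelope on a log-frequency axis, specified in cycles/octave and peak-to-valley depth in dB. The parameter values are assumptions, not the published stimulus settings.

```python
import numpy as np

def rippled_noise(density_c_per_oct=1.0, depth_db=10.0, phase=0.0,
                  f_lo=350.0, f_hi=5600.0, dur=0.5, fs=44100):
    """Random-phase noise with a sinusoidal (in log frequency) spectral
    envelope, built in the frequency domain and inverse-transformed."""
    n = int(dur * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)            # log-frequency axis
    env_db = (depth_db / 2) * np.sin(2 * np.pi * density_c_per_oct * octaves + phase)
    rng = np.random.default_rng(0)
    phases = rng.uniform(0, 2 * np.pi, band.sum())   # random component phases
    spec[band] = 10 ** (env_db / 20) * np.exp(1j * phases)
    return np.fft.irfft(spec, n)

# Phase-reversal ripple discrimination compares phase=0 vs. phase=pi stimuli.
stim = rippled_noise()
```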

 

Results suggest that the effect of level was much greater for narrowband measures of auditory filter bandwidth than for broadband measures of spectral resolution. The difference between the measures of spectral resolution was greatest for the higher-level test signal, and this effect correlated significantly with perceptual measures of speech recognition at multiple levels under both linear and wide dynamic range compression amplification conditions.

 

March 28, 2013

Oliver Regele (MS student, BME, Otto Lab)

 

The perceptual salience of amplitude modulated cortical stimulation: peak equivalence, or RMS equivalence?

 

 

April 4, 2013

Glenis Long, PhD (Speech-Language-Hearing Program, City University of New York)

 

Using continuously sweeping tones to evaluate cochlear function with otoacoustic emissions

 

Shortly after leaving Purdue, I worked with Carrick Talmadge to develop a more efficient procedure for measuring otoacoustic emissions (OAEs), based on the models of OAE generation we developed at Purdue, which reveal that OAEs can have multiple sources.  I will talk about the development of this procedure and what it has permitted us to learn about: the development of OAEs, the impact of negative middle-ear pressure, the middle-ear reflex, and efferent activation.

 

 

April 11, 2013

Kaidi Zhang (PhD student, BIO, Fekete lab)

 

Exploring microRNA expression and function in chicken basilar papilla

 

MicroRNAs (miRNAs) are small single-stranded RNAs that mediate post-transcriptional repression of gene expression by binding to target mRNA 3’UTRs. In three studies, mice with conditional knockouts of Dicer1 in the embryonic inner ear showed severe neurosensory defects and hair cell degeneration, revealing the importance of miRNAs in hair cell maturation and maintenance. The miR-183 family, including miR-183, -96, and -182, is expressed in otic, optic, and olfactory epithelia in zebrafish and mice. Mutations of miR-96 underlie inherited deafness in mice and humans. However, the expression pattern of the miR-183 family in the chicken inner ear was unknown. We found a consistent radial gradient of the miR-183 family in chicken basilar papilla hair cells, with higher expression in tall hair cells than in short hair cells. Expression peaks at embryonic day 14. We therefore hypothesize that the miR-183 family plays an important role in the establishment of the radial gradient, i.e., the differentiation of tall versus short hair cells.

 

 

April 18, 2013

David Axe (PhD student, BME, Heinz lab)

 

Effects of inner hair cell damage on temporal coding

 

It is widely believed that the neural patterns of temporal coding within the auditory periphery and CNS change following cochlear hearing loss, and a number of recent studies have aimed to more fully understand and characterize these changes. In these studies, noise exposure has been a common method for inducing hearing loss in animal models. Unfortunately, its effects are nonspecific, affecting both inner and outer hair cells as well as the surrounding tissues. Because of this mixed hair-cell damage, it has been difficult to tease apart the specific effects that each of these pathologies has on temporal coding. In the present study, we used the chemotherapy drug carboplatin to induce inner hair cell (IHC)-specific lesions in the cochleae of chinchillas. The goals of our study are to use acoustically evoked potentials and acute single-fiber recordings from the auditory nerve, in parallel with computational modeling, to investigate the effects of IHC damage on temporal coding. Preliminary findings in carboplatin-exposed chinchillas, which have near-normal ABR thresholds, include a decrease in ABR amplitudes at high sound levels, as well as a decrease in the strength of temporal coding in frequency following responses.

 

 

April 25, 2013

Alejandro Velez (post-doc, BIO, Lucas lab)

 

Uncovering the evolution of auditory processing mechanisms: Insights from frogs and songbirds

 

Animal vocal communication requires a signaler, a signal that propagates through the environment, and a receiver. While several studies have shown how developmental, morphological, and ecological factors shape the design of acoustic communication signals, we know much less about the factors that shape signal processing mechanisms in the receivers. Using frogs and songbirds as model systems, I have studied potential factors affecting the evolution of auditory processing mechanisms at the behavioral and physiological levels. In this seminar, I will examine how background noise, heterospecific signals, and vocal complexity may shape signal processing mechanisms.

 

 

May 2, 2013

Krista Ashmore (AuD student, SLHS, Luo lab)

 

A cross-language study of context effects on tone recognition

 

Previous studies with Chinese normal-hearing (NH) listeners have found a contrastive context effect on Mandarin tone recognition, with more low-frequency responses in a high-frequency context and vice versa. Such tone normalization may help listeners better recognize tones produced by different talkers with various fundamental frequencies. In our recent study, English NH listeners and cochlear implant (CI) users identified flat and rising pitch contours simulating Mandarin Tones 1 and 2, with or without context, to further test the generality and origin of tone normalization. A similar contrastive context effect on pitch contour identification was found in English NH listeners and in CI users using their clinical processors. Thus, the previously reported tone normalization may be a result of general central pitch processing, and may occur even with non-speech pitch contours presented to English CI users, who have neither tonal-language experience nor fine-structure pitch cues. In a follow-up study, Chinese NH listeners identified non-speech pitch contours as well as Mandarin Chinese tones, with or without context, and their results were compared with those of English NH listeners to test the contribution of tonal-language experience to tone normalization. The results showed that the tonal-language experience of Chinese NH listeners led to significantly stronger context effects than those in English NH listeners.