
Seminars in Hearing Research at Purdue

 

PAST: Abstracts

Talks in 2013-2014

[LILY G-401: Fall; UNIV 317: Spring; 10:30-11:20 am]

 

Link to Schedule

 

 

August 29, 2013

Mike Heinz, PhD (SLHS and BME)

 

Modeling disrupted tonotopicity of temporal coding following sensorineural hearing loss

Perceptual studies suggest that sensorineural hearing loss (SNHL) affects neural coding of temporal fine structure (TFS) more than envelope (ENV). Although the “quantity” of TFS coding is degraded only in background noise, Wiener-kernel analyses suggest SNHL disrupts tonotopicity (i.e., the “quality”) of TFS coding for complex sounds more than ENV coding. Specifically, auditory-nerve (AN) fibers in noise-exposed chinchillas can have their dominant TFS component located within their tuning-curve tail (i.e., the wrong place) while their ENV response remains centered at CF. Here, the ability of an AN model (Zilany and Bruce, 2007) to replicate this dissociation between TFS and ENV tonotopicity was evaluated. By varying the degree of outer- and inner-hair-cell damage, hypothesized factors such as hypersensitive tails and tip-to-tail ratio were examined. The model predicted the main trends in our physiological data: 1) no loss of tonotopicity for lower CFs without a clear tip/tail distinction, 2) more easily disrupted TFS tonotopicity than ENV (without requiring hypersensitive tails), and 3) disruption of both TFS and ENV tonotopicity for severely degraded tips. This computational approach allows exploration of the interaction between tip-to-tail ratio and phase-locking roll-off, and could be used to explore whether amplification strategies can restore cochlear tonotopicity. Supported by NIH grants R01-DC009838 and F32-DC012236.
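For readers unfamiliar with the ENV/TFS distinction above, a minimal sketch shows how the two components of a sound can be separated with a Hilbert transform. This is a textbook decomposition, not the Wiener-kernel analysis used in the study, and all signal parameters here are illustrative:

```python
import numpy as np
from scipy.signal import hilbert

fs = 20000                       # sample rate (Hz), illustrative
t = np.arange(0, 0.5, 1 / fs)
carrier_hz, mod_hz = 1000.0, 40.0

# Amplitude-modulated tone: slow envelope (ENV) riding on a fast carrier (TFS)
env_true = 1.0 + 0.8 * np.sin(2 * np.pi * mod_hz * t)
x = env_true * np.sin(2 * np.pi * carrier_hz * t)

analytic = hilbert(x)
env_est = np.abs(analytic)             # envelope (ENV)
tfs_est = np.cos(np.angle(analytic))   # unit-amplitude fine structure (TFS)

# The dominant spectral component of the recovered TFS sits at the carrier,
# analogous to asking where a fiber's dominant TFS component lies re: its CF
spec = np.abs(np.fft.rfft(tfs_est))
freqs = np.fft.rfftfreq(len(tfs_est), 1 / fs)
dominant_hz = freqs[np.argmax(spec)]
print(round(dominant_hz))  # 1000
```

In the physiological data, the question is whether this dominant TFS frequency remains near the fiber's characteristic frequency or migrates into the tuning-curve tail after noise exposure.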

 

September 5, 2013

Beth Strickland, PhD (SLHS) and Kate Geisen (SLHS - AuD)

 

Comparing gain reduction in simultaneous and forward masking

 

Our line of research examines the effects of preceding sound on behavioral estimates thought to reflect the active process in the cochlea.  In our original task using simultaneous masking, presenting preceding sound made it easier to hear a signal.  We were able to model these results as a reduction in the gain of the active process by the preceding sound.  We then moved to a forward masking task to remove effects of suppression.  In this task, presenting the preceding sound makes the signal harder to hear.  While this can still be modeled as gain reduction, there are other competing hypotheses.  In the present research, we measured simultaneous masking and forward masking in the same listeners.  Results show that the amount of gain reduction is correlated across the measures, supporting the hypothesis of gain reduction by preceding sound.  Research supported by NIH (NIDCD) R01-DC008327.

 

 

September 12, 2013

Ed Bartlett, PhD (BIO and BME)

 

Short Stories about the Non-lemniscal Thalamus: Computational, Physiological and Anatomical Findings That Distinguish MGBm Processing

Edward Bartlett, Cal Rabang, Yamini Venkataraman, Stephanie Gardner, Purdue University

BACKGROUND: The medial region of the medial geniculate body (MGBm), broadly defined to include the medial division of MGB, the suprageniculate and the posterior intralaminar nuclei, surrounds the ventral (MGBv) and dorsal (MGBd) divisions of the MGB on their ventral and medial aspects. MGBm is the most heterogeneous region of the MGB, in terms of connectivity, anatomy and physiology. Unlike MGBv and MGBd, MGBm projections mainly avoid middle cortical layers or they project to subcortical regions, where they have been shown to be critical for the formation of auditory-based associative memories. Here some salient differences in anatomy and physiology will be reviewed, along with new computational and anatomical data demonstrating how those differences are likely to influence auditory processing.

METHODS: Single-compartment models of MGBm neurons were created using NEURON software. These models incorporated physiological data obtained from rat brain slice studies, including membrane properties, excitatory synaptic properties, and inhibitory synaptic properties. In addition, the distributions of ascending axon terminals from the inferior colliculus (IC) were estimated using vesicular glutamate transporter 2 (vGluT2) immunohistochemistry.

RESULTS: Physiological results have shown that MGBm neurons often lack a low-threshold calcium current, a hallmark of standard thalamocortical neurons. In addition, many of these neurons have an apamin-sensitive calcium-activated potassium channel that causes MGBm neurons to adapt to sustained current injections. These properties were incorporated to compare model responses to collicular inputs in MGBm versus MGBv neurons.
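The adaptation behavior described above can be caricatured with a toy adapting integrate-and-fire neuron, in which a spike-triggered potassium-like current (loosely analogous to the apamin-sensitive SK/AHP current) slows firing during a sustained current step. This is a deliberate simplification, not the authors' NEURON models, and every parameter value is invented for illustration:

```python
import numpy as np

def adapting_lif(i_inj_pa, dur_ms=500.0, dt=0.1):
    """Toy adapting integrate-and-fire neuron (forward Euler).
    Units: pF, nS, mV, pA, ms."""
    c_m, g_l, e_l = 200.0, 10.0, -65.0   # capacitance, leak, rest
    v_th, v_reset = -50.0, -65.0         # threshold, reset
    tau_w, b = 150.0, 40.0               # adaptation decay (ms), jump (pA)
    v, w, spikes = e_l, 0.0, []
    for k in range(int(dur_ms / dt)):
        v += dt * (-g_l * (v - e_l) - w + i_inj_pa) / c_m
        w += dt * (-w / tau_w)
        if v >= v_th:
            v = v_reset
            w += b                       # each spike builds adaptation current
            spikes.append(k * dt)
    return np.array(spikes)

spk = adapting_lif(400.0)                # sustained 400 pA current injection
early = int(np.sum(spk < 100.0))         # spikes in the first 100 ms
late = int(np.sum(spk >= 400.0))         # spikes in the last 100 ms
print(early, late)                       # firing adapts: early > late
```

Without the adaptation current (b = 0), the same input would drive steady firing; with it, the spike rate declines over the step, as described for MGBm neurons.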

CONCLUSIONS: Many MGBm neurons should not be grouped with MGBv and MGBd neurons when considering their physiology or their participation in thalamocortical sensory processing.

 

September 19, 2013

Xin Luo, PhD (SLHS)

 

Processing of talker variation in Mandarin speech by cochlear implant children with bimodal fitting

 

Mandarin tones are characterized by different pitch levels and contours.  Given limited access to pitch cues, cochlear implant (CI) users can only achieve moderate levels of tone recognition.  Different talkers have different voice pitch ranges and variable tone productions.  This study tested whether the use of a hearing aid in the non-implanted ear (i.e., bimodal fitting) may help Mandarin-speaking CI children better handle talker variations in tone and vowel recognition.  The first experiment found that tone recognition was significantly better with bimodal fitting than with CI alone.  Stimuli mixed across talkers yielded significantly poorer tone recognition than those blocked by talker with either bimodal fitting or CI alone.  In contrast, there were no bimodal benefits or talker-variability effects for vowel recognition.  The second experiment found that CI children exhibited contrastive context effects on tone recognition with bimodal fitting but not with CI alone.  Residual acoustic hearing may allow CI children to perform talker normalization in tone recognition using contextual pitch cues.

 

 

September 26, 2013

Aravind Parthasarathy, PhD (BIO)

 

Relationship between Frequency Following Responses and Other Measures of Auditory Function in an Animal Model of Aging

                      

Auditory evoked potentials provide a non-invasive neurophysiological measure of auditory processing. Auditory brainstem responses (ABRs) to brief stimuli are the predominantly used clinical measure to assess auditory function. However, frequency-following responses (FFRs) to longer-duration stimuli are rapidly gaining prominence as a measure of complex sound processing in the brainstem and midbrain. In spite of numerous studies reporting changes in the FFRs under pathological conditions, including aging, the relationships between the FFRs and the ABRs are not clearly understood. Furthermore, the underlying neural basis of the FFRs, including the generators and types of neuronal responses, is also not well understood. In this study, the relationships between ABR, FFR, and neuronal responses are explored in a rodent model of aging. ABRs to click stimuli and FFRs to sinusoidally amplitude-modulated noise were measured in young (3-6 months) and aged (22-25 months) Fischer-344 rats. Responses from neurons of the inferior colliculus (IC), a known generator of the evoked potentials, were measured to similar stimuli. Age-related differences were observed in all these measures, primarily as decreases in wave amplitudes and phase-locking capacity. Comparing these various measures revealed significant correlations between the FFR amplitudes and most ABR wave amplitudes in the young. However, these were not present in the aged. The neuronal measures of synchrony in the IC were correlated to a greater degree with the FFRs, especially in the aged. These results suggest that different neurophysiological processes give rise to the FFRs and to the ABRs, and that these processes change asymmetrically with age.
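The phase-locking capacity mentioned above is commonly quantified with vector strength (the Goldberg-Brown metric): 1 for perfect phase locking to a modulation frequency, near 0 for unrelated spike times. A minimal sketch with made-up spike times (not data from this study):

```python
import numpy as np

def vector_strength(spike_times_s, freq_hz):
    """Goldberg-Brown vector strength: magnitude of the mean phase vector."""
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times_s)
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(0)
f = 100.0  # hypothetical modulation frequency (Hz)

# Perfectly locked spikes: one spike per cycle at a fixed phase
locked = np.arange(200) / f + 0.002
# Unlocked spikes: uniform random times over the same 2 s
random_spikes = rng.uniform(0, 2.0, 200)

print(round(vector_strength(locked, f), 2))     # 1.0
print(vector_strength(random_spikes, f) < 0.2)  # True
```

An age-related drop in phase-locking capacity would appear as a systematic decrease in this quantity for the same stimulus.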

 

 

October 3, 2013

Keith Kluender, PhD (SLHS)

 

Covariance among properties renders sounds from near-indiscriminable to hyperdiscriminable

 

Stimulation from objects and events in the environment has statistical structure, and sensorineural systems exploit this structure on the path to perception. When listeners accumulate experience with novel complex sounds whose acoustic properties reliably covary, perceptual performance is predicted by statistical relationships between sound properties, not the properties themselves. Here we show that sounds systematically progress from near-indiscriminable to hyperdiscriminable (performance beyond apparent sensory limits) as they increasingly deviate from their experienced patterns of covariance. Covariance is shown to be the principal determinant of perceptual organization and plasticity, and auditory hyperdiscriminability is discovered for the first time. Results support efficient coding of statistical structure as a model for both perceptual and neural organization.
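One common way to formalize "deviation from an experienced pattern of covariance" is Mahalanobis distance, which weights deviations by the learned covariance structure. The following sketch illustrates the general idea only (it is not the authors' analysis, and the two "acoustic properties" are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "experienced" sounds: two acoustic properties that covary
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
experienced = rng.multivariate_normal([0.0, 0.0], cov, size=500)

mean = experienced.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(experienced.T))

def mahalanobis(x):
    """Distance from the experienced distribution, in covariance-scaled units."""
    d = np.asarray(x) - mean
    return float(np.sqrt(d @ inv_cov @ d))

on_pattern = np.array([1.0, 1.0])    # consistent with the learned covariance
off_pattern = np.array([1.0, -1.0])  # same Euclidean distance, violates it

print(mahalanobis(on_pattern) < mahalanobis(off_pattern))  # True
```

Two stimuli equally far from the mean in raw acoustic units can thus differ greatly in how much they deviate from the experienced covariance, paralleling the near-indiscriminable vs. hyperdiscriminable contrast described above.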

 

 

October 10, 2013

Ravi Krishnan, PhD (SLHS)

 

Cortical pitch response components index multiple attributes of pitch contours

Ananthanarayan Krishnan, Saradha Ananthakrishnan, Jackson T Gandour, Venkat Vijayaraghavan

 

Background: Voice pitch is an important information-bearing component of language and music that is subject to experience-dependent plasticity at both cortical and subcortical stages of processing. We recently demonstrated that the EEG-derived Na component of the Cortical Pitch Response (CPR) is sensitive to both pitch and its salience, similar to the MEG-derived pitch onset response (POR), whose putative source lies in the lateral Heschl’s gyrus, a presumed pitch-processing center. In addition to Na, the CPR is characterized by several transient components that may also carry pitch-relevant information and may provide a new, complementary window to examine the influence of language/music experience on pitch encoding, as well as shed more light on the hierarchical nature of pitch processing along the auditory pathways. To this end, we characterize here the multiple transient components of the CPR and label them in relation to specific aspects of our dynamic, curvilinear pitch stimuli (e.g., pitch onset, pitch acceleration, pitch duration, and pitch offset).

 

Methods: CPRs were recorded from 10 Chinese listeners in response to IRN stimuli representing three variants of Mandarin Tone 2: short (T2_150), intermediate (T2_200), and long duration (T2_250). The average velocity rates (in st/s) for T2_250 (25.6), T2_200 (32.1), and T2_150 (42.7) fall within the physiological limits of the speed of rising pitch changes. Each IRN stimulus contained a 500 ms noise precursor that was crossfaded into the pitch-eliciting segment. Stimuli were presented binaurally at 80 dB SPL at a repetition rate of 0.93/sec.

 

Results: Absolute latencies (Pb, Nb, and Pc) and the inter-peak latencies (Na-Pb, Pb-Nb) increased with decreasing acceleration rate, with smaller changes for Na and Pa. Peak-to-peak amplitude for components Na-Pb and Pb-Nb also increased systematically with decreasing pitch acceleration. The Pa-Nc interval increased with duration. Pc/Nc latency was consistent with an offset response.

 

Conclusions: These results suggest that components of the CPR may index different aspects of the pitch-eliciting stimulus. For example, components Na-Pb and Pb-Nb may index the more rapidly changing portions of the stimuli, i.e., pitch acceleration; the time-invariant Na may reflect an initial estimate of pitch onset and its salience; the Pa-Nc interval corresponds best with the duration of the stimuli; and Pc/Nc reflects stimulus offset. Thus, the CPR provides a physiologic window to evaluate early, sensory-level cortical processing of dynamic, curvilinear pitch stimuli that are ecologically representative of those that occur in natural speech. As a result, we are now able to investigate experience-dependent enhancements (language, music) in pitch-specific encoding at the cortical and brainstem levels concurrently, which offers promise of illuminating the hierarchical nature of pitch processing along the auditory pathways.

 

Funding: Research supported by a grant from NIH (R01 DC008549-05)

 

 

October 17, 2013

 

Jeffery Lichtenhan, PhD

Assistant Professor of Otolaryngology

Washington University School of Medicine

http://otocore.wustl.edu/lichtenhanlab

 

The Auditory Nerve Overlapped Waveform (ANOW) as a new measure of low-frequency hearing: tutorial and experiments

                       

The Auditory Nerve Overlapped Waveform (ANOW) may be helpful when, for example, quantifying infant hearing loss. Guidelines state that permanent newborn hearing loss should be treated by six months of age to significantly reduce the chances of developing abnormal speech and language. Since presently available objective tests do not work well at low frequencies, it is possible that hearing aid amplification prescriptions are over- or underestimated until reliable behavioral measures are feasible. We aim to translate the ANOW to humans and determine if it can complement established objective measures that work well at high frequencies.

 

 

October 24, 2013

Ryan Verner (BME - PhD)

 

Development of Discriminable Sensations for Auditory Cortical Prostheses

 

Intracortical microstimulation (ICMS) is a promising candidate for the restoration of hearing in special cases of clinical deafness, such as Neurofibromatosis Type II.  The broadest goal for the development of sensory prostheses is the characterization of highly informative and aesthetically pleasing sensation, which has proven more difficult for cortical prostheses as compared to cochlear implants.  Our proposed studies aim to develop stimulation parameters that provide for maximally discriminable ICMS percepts in auditory cortex.  Using a conditioned-avoidance, yes-no paradigm, rats are trained to identify differences in intensity, frequency, and spatial location of ICMS stimuli.  Recent preliminary data show intensity discrimination limens at approximately 1.5 dB (relative to 1 mA, performed 3 dB above threshold) and reasonable spatial discrimination at 200-400 microns in the anterior-posterior plane.  Repeated studies using electrocorticography arrays show increased spatial discrimination limens (approx. 1200-2000 microns), as expected.  Our studies in auditory ICMS discrimination are likely regionally dependent, so these studies will be expanded to other sensory systems.

 

 

 

October 31, 2013

Kristina DeRoy Milvae, Au.D.  (SLHS - PhD)

 

Detection to discrimination: Ways to evaluate perceptual effects of cochlear gain reduction

 

Sensory systems are known to adjust to the environment.  One way that the auditory system may accomplish this is via the medial olivocochlear reflex (MOCR), an efferent pathway from the superior olivary complex (SOC) of the brainstem to the cochlea.  The MOCR reduces cochlear outer hair cell gain with acoustic stimulation.  Although this reflex is well documented, its influence on auditory perception continues to be debated.  A forward masking paradigm is used with consideration of the time course of the MOCR in an effort to isolate the effects of gain reduction with tonal stimuli.  Shifts in signal detection thresholds are measured as masker level is varied.  Signal thresholds are shown to grow approximately linearly as masker level is increased, but the slope depends on masker frequency.  Across subjects, maximum estimated gain reduction ranged from 20 to 40 dB for a 4000 Hz signal.  A similar paradigm was piloted to investigate perceptual effects of the MOCR on speech perception in a consonant-vowel (CV) discrimination task.  Future directions will be discussed.  Research supported by NIH (NIDCD) R01-DC008327.
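The threshold-growth analysis described above can be sketched with hypothetical numbers: fit a line to signal threshold vs. masker level for each masker frequency and compare slopes. All values below are invented for illustration and are not the study's data:

```python
import numpy as np

# Hypothetical signal detection thresholds (dB SPL) vs. forward-masker level
# (dB SPL), mimicking the approximately linear growth described in the abstract
masker_db = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
on_freq_thresh = np.array([32.0, 42.0, 51.0, 62.0, 71.0])   # on-frequency masker
off_freq_thresh = np.array([20.0, 23.0, 26.0, 30.0, 33.0])  # off-frequency masker

# Least-squares slope of threshold growth for each masker frequency
on_slope, _ = np.polyfit(masker_db, on_freq_thresh, 1)
off_slope, _ = np.polyfit(masker_db, off_freq_thresh, 1)

print(round(on_slope, 2))   # 0.98 : near-unity (linear) growth on frequency
print(round(off_slope, 2))  # 0.33 : shallow growth off frequency
```

In this kind of analysis the frequency-dependent slopes, together with the linearized input/output functions they imply, are what constrain the estimate of how much cochlear gain was reduced.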

 

 

November 7, 2013

Joshua Alexander, Ph.D.  (SLHS)

 

The Trials and Tribulations of Amplifying Speech for Sensorineural Hearing Loss

 

Wide dynamic range compression (WDRC) is a ubiquitous feature in hearing aids that is used to repackage information in the amplitude domain for the explicit purpose of enhancing signal audibility.  Most previously reported investigations of the effects of WDRC release time (RT) and number of channels on speech intelligibility have examined these factors in isolation and have not considered them jointly.  When considered separately, short RT and multiple channels appear to be beneficial for audibility.  However, these two factors may also be associated with reduced temporal and spectral contrast, respectively, which could have negative consequences for speech intelligibility.  The purpose of this study is to investigate the joint effects that RT and number of channels have on recognition of sentences in the presence of steady-state and modulated maskers at different signal-to-noise ratios (SNRs).  In addition, a few investigators have described changes in output SNR following processing with WDRC; therefore, how different combinations of WDRC parameters affect output SNR, and the role this plays in the observed findings, is also investigated here.  Overall, the results can be loosely interpreted as representing a balance between audibility and distortion.
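A static WDRC input-output rule can be sketched as follows. The parameters (knee point, compression ratio, linear gain) are hypothetical, and this sketch deliberately ignores the release-time dynamics and multichannel filtering that are central to the study:

```python
def wdrc_output_db(input_db, knee_db=45.0, ratio=3.0, linear_gain_db=20.0):
    """Static WDRC input-output rule: full linear gain below the compression
    knee; above it, output grows only 1/ratio dB per input dB."""
    if input_db <= knee_db:
        return input_db + linear_gain_db
    return knee_db + linear_gain_db + (input_db - knee_db) / ratio

print(wdrc_output_db(40.0))  # 60.0 : linear region, full 20 dB of gain
print(wdrc_output_db(75.0))  # 75.0 : compressed region, only 10 dB of gain
```

The RT parameter studied in the abstract governs how quickly the effective gain returns to this static curve after an intense sound, which is why short RTs can reduce temporal contrast even when the static rule is fixed.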

 

 

**SPECIAL DAY**

November 15, 2013  (12:30-1:30, PSYC 3187)

Bill Shofner, Ph.D.  (Indiana University - SHS)

 

Auditory Perception of Pitch and Speech with Degraded Acoustic Cues: A Comparative Approach

 

Noise-vocoders have been used as a simulation of hearing with a cochlear implant.  We have been using noise-vocoding as a tool for degrading the acoustic features of complex sounds to study auditory perception, such as pitch perception.  Whether the mechanisms giving rise to pitch reflect spectral (place) or temporal (time) cues is still equivocal, because generally sounds having strong harmonic structures also have strong periodic structures.  We have found that when a harmonic tone complex is passed through a noise-vocoder, the resulting sound can have a harmonic structure with large peak-to-valley ratios, but little or no periodicity in the temporal structure.  We used noise-vocoded versions of harmonic complex tones to study pitch perception in chinchillas and human listeners. The results suggest that spectral cues contribute little if any to pitch perception in chinchillas, but spectral cues can contribute substantially to pitch perception in human listeners.  Data from current studies comparing speech perception of noise-vocoded speech sounds will also be presented.  The results of these studies will be discussed in terms of cochlear tuning in humans and non-human mammals.
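A minimal noise vocoder follows the standard recipe: band-pass analysis, envelope extraction, and re-imposition of each envelope on band-limited noise. The sketch below is illustrative (filter order, band edges, and stimulus are assumptions, not the vocoder used in these studies):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, edges_hz):
    """Minimal noise vocoder: for each analysis band, extract the envelope
    and use it to modulate band-limited noise; sum the bands."""
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges_hz[:-1], edges_hz[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, x)))        # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * carrier                              # envelope on noise
    return out

fs = 16000
t = np.arange(0, 0.3, 1 / fs)
tone_complex = sum(np.sin(2 * np.pi * f * t) for f in (200, 400, 600))
y = noise_vocode(tone_complex, fs, edges_hz=[100, 300, 500, 700])
```

With coarse bands the output preserves the spectral profile of the input while replacing its periodic temporal structure with noise, which is exactly the dissociation the abstract exploits; the study's observation is that with enough channels, harmonic spectral structure can survive vocoding even when periodicity does not.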

 

 

November 22, 2013

Kelly Ronald (PhD student, BIO, Lucas lab)

 

Beauty in the ear of the beholder?: Linking hearing and mate-choice in a songbird

 

In general, communication involves a sender producing a signal and a receiver processing and responding to that signal after it has traveled through the environment.  Songbirds have been a model system for studying the evolution of communication signals, as males use song to court females; females then evaluate males’ songs and make a mate-choice decision.  We now know much about how males vary in their signaling and how the environment constrains the propagation of these signals, and we are beginning to build a foundation for understanding how receivers process those signals.  Nevertheless, the link between sensory processing and eventual mate choice is still largely underdeveloped.  We asked whether female auditory processing can help to explain the variation we see in mate choice in female brown-headed cowbirds (Molothrus ater).  Results will be discussed in the context of sexual selection theory, and implications for the evolution of communication signals will be reviewed.

 

 

 

December 5, 2013

Vidhya Munnamalai, PhD (BIO, Fekete Lab)

 

Dissecting the multiple roles of Wnts in mammalian cochlear development

 

In the mouse cochlea, the longitudinal axis confers frequency specificity for hearing, while the radial axis establishes a functional dichotomy between two classes of hair cells: inner hair cells (IHCs) and outer hair cells (OHCs). Emerging evidence shows that secreted Wnts can influence the acquisition of cell identities across the radial axis of the cochlea.  Based on the spatial and temporal patterns of Wnt gene expression, as well as changes in cell fates in response to manipulation of Wnt signaling, we hypothesize that Wnt ligands function as morphogens to specify cell fates across the radial axis in the cochlea. We will test this hypothesis using both gain- and loss-of-function approaches. By temporally manipulating Wnt signaling in vitro, we can identify downstream Wnt targets that are involved in the patterning of the cochlea. Organ cultures of E12.5 cochleas exposed to the Wnt activator CHIR99021 at different time points showed both temporal and radial patterning effects. The goal is to reveal whether separate regulatory networks operate sequentially or concurrently across the radial axis.

 

 

 

[Spring Semester in BRNG 1232 (**Except 1/30**)]

 

January 16, 2014

Alejandro Velez, PhD (BIO, Lucas Lab)

 

Auditory processing in songbirds: Seasonal effects and correlations with phylogeny, habitat, and vocal complexity.

 

In songbirds, the evolution of vocal communication signals can be constrained by habitat effects on sound propagation, the morphology of sound-producing structures, and phylogenetic relationships among species. In addition, vocal communication signals change substantially across seasons; songs for territory defense and mate attraction are only produced during the breeding season, while calls that function in maintaining group cohesion, alerting others to the presence of predators, and announcing the presence of food are produced outside the breeding season. We currently know little about the factors that shape signal processing mechanisms in receivers and how these mechanisms vary seasonally to adjust for changes in the vocal repertoire. In this seminar, I will present results of experiments using auditory evoked potentials to study the effects of vocal complexity, habitat, phylogeny, and season on different aspects of auditory processing in songbirds.

 

 

January 23, 2014

Alex Francis, PhD (SLHS)

 

Psychophysiological indices of listening effort: Preliminary results and some speculation

 

Alexander L. Francis, Megan MacPherson, Bharath Chandrasekaran, Ann Alvar

 

Older adults and listeners with hearing impairment often find it exhausting to listen to speech in background noise, even when they are successful at it. A common complaint is, "I don't like to go to restaurants anymore, it's just too tiring to understand what people are saying."  According to the effortfulness hypothesis, sub-clinical hearing deficits may increase the cognitive demand of speech understanding, making listening in noise more effortful even when recognition performance remains intact. Even for typically hearing listeners, separating speech from background noise requires both segregating target speech from masking signals and selectively attending to the target while ignoring maskers. Both segregation and selection may demand cognitive resources, but it is not clear how these demands might interact with either age or hearing impairment. To begin to address this issue, it is necessary to measure listening effort independently from intelligibility, and under conditions that put relatively more emphasis on either segregation or selection. Here I will report preliminary results from a study measuring listening effort behaviorally via traditional rating scales (NASA TLX), and psychophysiologically in terms of autonomic nervous system responses (pulse period and amplitude, and skin conductance). Listeners heard and repeated sentences under conditions in which performance is dominated by energetic masking (speech masked by broad-band noise, mainly limited by segregation) or informational masking (speech masked by two-talker babble, mainly limited by selection), and also when listening to cognitively demanding speech without masking (synthetic speech).

 

January 30, 2014  ***** DIFFERENT ROOM: UNIV 317 *****

Aravind Parthasarathy, PhD (BIO, Bartlett Lab)

 

Age-related changes in the representation of simultaneous amplitude-modulated tones in the auditory brainstem and midbrain

 

Age-related changes in hearing occur as a combination of changes in the peripheral hearing organs and changes in the central auditory pathway. These changes are especially present under complex listening conditions, such as processing concurrent sound stimuli. This study aims at understanding changes in the processing of simultaneous sinusoidally amplitude-modulated (sAM) tones in the auditory brainstem and midbrain of a rodent model of aging. Envelope-following responses (EFRs) were obtained to target sAM stimuli presented to young and aged F-344 rats in the presence of slower masking sAM stimuli. The center frequencies of the target and masker were kept the same, or separated to various degrees, to test within-channel and across-channel masking conditions. The signal-to-noise ratio (SNR) was decreased by increasing the sound level of the masker, while keeping the sound level of the target at a constant supra-threshold level. Young animals exhibited significantly larger EFR amplitudes compared to aged animals, both for the target and the masker, especially at low SNRs. The overall trends in masking were similar in the young and aged animals within each masking condition. In comparing across masking conditions, younger animals exhibited greater resolution in separating the two across-channel conditions compared to the aged. These conditions were then tested on an established computational model of the auditory nerve to understand the contributions of the peripheral and central auditory pathways in processing these stimuli. These results suggest that age-related deficits occur as a combination of a reduction in overall phase-locking capacity and a decrease in resolving stimuli across multiple channels, potentially due to changes in the low-frequency tails of high-frequency auditory nerve neurons.

 

 

February 6, 2014 ***** NEW ROOM REST OF SEMESTER: UNIV 317 *****

Ching-Chih Wu (ECE/SLHS, Luo Lab)

 

Stimulation and Excitation Patterns of Standard, Steered, and Spanned Partial Tripolar Modes in Cochlear Implants

 

Current steering and electrode spanning in focused partial tripolar (pTP) stimulation modes have been proposed to increase spectral resolution and improve pitch perception for cochlear implant (CI) users. In our previous studies, the pitch changes with steered and spanned pTP modes in CI users were consistent with model predictions based on the centroid rather than the peak of the neural excitation pattern. This study aims to verify the model predictions and explain the inter-subject variability in pitch perception by directly measuring the excitation patterns of the experimental stimulation modes. The excitation patterns were measured at the physical, neural, and perceptual levels using electric field imaging (EFI), the electrically evoked compound action potential (ECAP), and a psychophysical forward masking technique, respectively. The results showed that the centroid of excitation at all three levels shifted with the experimental stimulation modes in directions consistent with the model predictions and pitch-ranking results. For this small number of subjects, the centroid shift at any level was not correlated with the sensitivity to pitch changes with the experimental stimulation modes.
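The centroid-vs-peak distinction above can be illustrated with a toy excitation pattern; the numbers here are invented, not measured EFI/ECAP data:

```python
import numpy as np

def excitation_centroid(positions, excitation):
    """Centroid (excitation-weighted mean location) of an excitation pattern.
    The abstract's model predicts pitch follows this, not the pattern's peak."""
    e = np.asarray(excitation, dtype=float)
    return float(np.sum(np.asarray(positions) * e) / np.sum(e))

pos = np.arange(1, 9)                            # electrode index (toy)
base = np.array([0, 1, 3, 8, 3, 1, 0, 0.0])      # symmetric pattern, peak at 4
steered = np.array([0, 1, 3, 8, 5, 2, 0, 0.0])   # steering adds basal spread

print(excitation_centroid(pos, base))            # 4.0 : centroid at the peak
print(excitation_centroid(pos, steered) > 4.0)   # True: centroid shifts
print(int(pos[np.argmax(steered)]))              # 4   : peak does not shift
```

This is the key property the study tests: a stimulation mode can shift the centroid of excitation, and hence the predicted pitch, without moving the peak at all.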

 

 

February 13, 2014

Split 25 min each:

 

Ann Hickox, PhD  (SLHS, Heinz Lab)

Auditory Nerve Coding of Concurrent Fundamental Frequencies Following Noise Exposure

 

Background. Distinguishing competing speech signals relies in part on separating talkers by differences in voice pitch, or fundamental frequency (F0). Individuals with sensorineural hearing loss (SNHL) often have increased difficulty differentiating between talkers, potentially from loss of pitch cues. Degraded coding of resolved and unresolved harmonics, e.g., reduced temporal fine structure (TFS) and envelope (ENV) cues, may contribute to this deficit. Auditory nerve (AN) responses normally contain sufficient temporal information to distinguish F0s of two concurrent harmonic tone complexes (HTCs) (e.g., Larsen et al., 2008). Here, we compare accuracy and strength of AN F0 coding in noise-exposed vs. unexposed chinchillas for F0 estimates based on TFS vs. ENV coding.

Methods. AN responses to concurrent HTCs were recorded in anesthetized chinchillas (some exposed to octave-band noise at 500 Hz, 116 dB SPL, 2 hrs). Each HTC included harmonics 2-20, where F0 of the lower complex (F01) was chosen to place the third harmonic at the fiber’s characteristic frequency (CF), and F0 of the higher complex (F02) was scaled to produce 1- or 4-semitone separation between F01 and F02. Responses to alternating polarity stimuli from a population of fibers with similar CFs (±0.6 octaves) were predicted from responses of a single fiber using spectro-temporal manipulation procedures assuming cochlear scaling invariance. Pooled responses derived from individual-CF shuffled auto-correlograms (SACs), difcors (TFS information) and sumcors (ENV information) were passed through a “pitch sieve”, or periodic template, to quantify accuracy and strength of F0 coding for each sub-population of virtual fibers.
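The "pitch sieve" idea, scoring candidate F0s by sampling a pooled correlogram at multiples of the candidate period, can be sketched directly on a waveform. This simplification operates on the raw stimulus autocorrelation rather than on shuffled spike correlograms, and all stimulus parameters are illustrative:

```python
import numpy as np

def sieve_salience(x, fs, f0_candidates, n_harm=8):
    """Score each candidate F0 by averaging the normalized autocorrelation
    at integer multiples of the candidate period (a simple 'pitch sieve')."""
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    ac = ac / ac[0]
    scores = []
    for f0 in f0_candidates:
        lags = (np.arange(1, n_harm + 1) * fs / f0).round().astype(int)
        scores.append(ac[lags].mean())
    return np.array(scores)

fs = 8000
t = np.arange(0, 0.2, 1 / fs)
# Two concurrent harmonic complexes (harmonics 2-7), F0s of 200 and 250 Hz
# (roughly a 4-semitone separation)
x = sum(np.sin(2 * np.pi * h * 200 * t) for h in range(2, 8))
x += sum(np.sin(2 * np.pi * h * 250 * t) for h in range(2, 8))

cands = np.arange(150.0, 320.0, 1.0)
s = sieve_salience(x, fs, cands)
lo_band = (cands >= 180) & (cands <= 220)
hi_band = (cands >= 230) & (cands <= 270)
f_lo = float(cands[lo_band][np.argmax(s[lo_band])])
f_hi = float(cands[hi_band][np.argmax(s[hi_band])])
print(f_lo, f_hi)  # salience peaks near the two true F0s, ~200 and ~250
```

In the study, the same sieve logic is applied to difcor- and sumcor-based pooled responses, so that F0 estimates can be attributed separately to TFS and ENV coding.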

Results. Accuracy and strength of F0 coding in temporal responses were greatest for lower CFs (< 2 kHz), as in previous studies, and SACs, difcors and sumcors produced similarly accurate F0 estimates. For some exposed fibers with elevated thresholds and spared tuning (suggestive of IHC damage), F0 estimates and coding strength were within normal bounds. For some exposed fibers with elevated thresholds and slightly poorer tuning (suggestive of OHC damage), strength of F0 coding was normal, or even enhanced, especially for sumcor-based analyses, and accuracy was not severely diminished.

Conclusions. For mild SNHL, AN representation of competing F0s (minimum 1-semitone separation) does not show diminished accuracy, but rather shows enhanced coding strength or “pitch salience” potentially related to previous reports of enhanced ENV coding. Degraded pitch discrimination ability may arise in more severe cases of SNHL from mechanisms like loss of tonotopicity or reduced “relative” TFS salience from an increased reliance on ENV cues.

 

David Axe (BME, Heinz Lab) and Taylor Remick (SLHS, Heinz Lab)

The Effects of Carboplatin-Induced Ototoxic Hearing Loss on Evoked Potentials in Chinchillas

 

Background. Sensorineural damage in the periphery can cause changes within the central auditory pathway. Noninvasive evoked potentials provide a system-wide physiological response to stimuli across different levels of the auditory pathway. Using these measures, the effects of peripheral sensorineural damage on temporal coding were assessed in both the periphery and at more central levels.

The effects of ototoxic hearing loss on peripheral and central auditory centers were measured non-invasively in chinchillas. The chemotherapy drug carboplatin has been shown to induce IHC-specific lesions, causing a disruption of signal transduction in the periphery without altering the mechanical properties of the basilar membrane or the functional characteristics of the cochlear amplifier.

Methods. Non-invasive measures were recorded in anesthetized chinchillas before carboplatin exposure and across a 4-week time course following exposure. DPOAEs were collected as a measure of OHC function, ensuring that observed changes were from damage to the IHCs. Auditory Brainstem Responses (ABRs) from short pure-tone bursts were used to determine threshold and to monitor suprathreshold function at different stages of the auditory pathway. Frequency-Following Responses (FFRs) were recorded using amplitude modulated tones at varying modulation frequencies to assess changes in temporal coding following hearing loss.

Results. Preliminary findings following carboplatin exposure suggested no change in OHC function (no change in DPOAE amplitudes) and no change in ABR thresholds. In contrast, ABR wave I showed reduced amplitudes and increased latencies following exposure. Later waves also showed decreased amplitudes and increased latencies; however, the magnitude of these changes differed from the peripherally based wave I, with less change in amplitude and more change in latency. FFRs showed a reduction in envelope coding within a few days of injection, with preliminary data in some cases suggesting a partial recovery of envelope coding as the time course progressed.

Discussion. Carboplatin-induced changes in peripheral coding follow many of the same trends observed following exposure to moderate noise levels that produce synaptic degeneration without permanent threshold shift. Intuitively, this makes sense, as both exposures decrease the total auditory signal transmitted through the auditory nerve (through deafferentation following noise, and through destruction of IHCs following carboplatin). Both cases result in a loss of total AN fibers able to convey information to the CNS. Studying time-course effects offers the possibility of gaining insight into the mechanisms underlying changes in central responses following peripheral damage.
 

February 20, 2014

Open ARO practice / feedback (posters or talks)

 

February 27, 2014

ARO wrap up (everyone describe their favorite 1-2 posters from ARO)

 

 

March 6, 2014

Elin Roverud, Au.D. (PhD student, SLHS, Strickland lab)

 

The Effects of Ipsilateral and Contralateral Noise on the ‘Mid-Level Hump’ in Intensity Discrimination

 

Listeners are generally able to discriminate small changes in sound level across a broad range of levels, even in the presence of background noise.  The underlying processes involved in this ability (i.e., how the auditory system maintains a wide dynamic range) are not completely understood.  An interesting phenomenon occurs when psychoacoustical intensity discrimination limens (IDLs) are measured for a short, high-frequency tone in quiet.  Studies have shown that IDLs for this tone are poorer at mid-levels than at lower and higher levels.  This “mid-level hump” has been theorized to reflect a limitation posed by mid-level basilar-membrane compression.  Studies have also shown that noise presented to the ipsilateral ear can reduce the mid-level hump for the tone.  If the mid-level hump is a result of compression, this suggests that the noise reduced compression, which led to an improved IDL.  One possible mechanism for a reduction in compression with sound is the medial olivocochlear reflex (MOCR), a bilateral, sound-evoked reflex that reduces the gain of the cochlear amplifier.  In this study, we examine whether the MOCR may be involved in improving IDLs in noise by observing changes in the mid-level hump with the presentation of ipsilateral and ipsilateral + contralateral (bilateral) noise.  Because the MOCR is bilateral, it was expected that the strongest effect would be observed in the bilateral noise condition.  Preliminary results show that bilateral noise leads to improved IDLs at the mid-level hump beyond the improvement produced by ipsilateral noise alone.  This result implicates the MOCR as a possible mechanism for maintaining fine intensity discrimination abilities in the presence of background noise.

 

 

March 13, 2014

Roy Lycke, M.S. (PhD student, BME, Otto lab)

 

A Chronic In Vivo Assessment of Micro-ECoGs for Neural Stimulation

 

Advances in neural interfaces capable of neural stimulation suggest that these devices could be used to treat injuries or disorders originating in the brain. Unfortunately, many current technologies for stimulating and recording from the nervous system do not suffice for this purpose: those that provide the channel density required for interfacing fail quickly in vivo, while others that last for an extended period in vivo are limited in their recording and stimulation capabilities. Of the current methodologies, electrocorticography (ECoG)-based implants show promise for providing both high-channel-density interfaces and chronic functionality after implantation.  This talk will discuss the advantages of using ECoGs for neural interfaces and the evaluation of a micro-ECoG for chronic stimulation. We will show that implanted micro-ECoGs provide a chronic low-impedance interface and the ability to reliably evoke behavioral responses months after implantation.

 

March 20, 2014

SPRING BREAK – NO MEETING

 

 

March 27, 2014

Donna Fekete, Ph.D. (BIO)

 

A subset of chicken statoacoustic ganglion neurites are repelled by Slit1 and Slit2

 

Mechanosensory hair cells in the chicken inner ear are innervated by bipolar afferent neurons of the statoacoustic ganglion (SAG). During development, individual SAG neurons project their peripheral process to only one of eight distinct sensory organs. These neuronal subtypes may respond differently to guidance cues as they explore the periphery in search of their target. The expression patterns of Slit transcripts in the developing inner ear on embryonic day 3 (E3) to E6 led to the prediction that Slit repellents might channel pioneer axons towards the primordia of the anterior and posterior cristae, while also blocking them from entering the saccular macula prematurely.  To test whether cristae afferents can be repelled by Slit ligands, Slit-expression plasmids were electroporated into their peripheral targets on E3, prior to the arrival of the afferents.  As predicted, 2-3 days later the afferent fibers failed to enter the anterior crista when confronted with ectopic Slit1 or Slit2.  However, when similarly challenged, the posterior crista afferents did not show Slit responsiveness. The sensitivity to ectopic Slits shown by the anterior crista afferents was more the exception than the rule: responsiveness to Slits was not observed when the SAG was isolated from E4 chickens and bathed in purified human Slit1 or Slit2 for 40 hours in vitro. Specifically, the corona of neurites emanating from SAG explants was unaffected by Slit treatment. Reduced axon outgrowth from E8 olfactory bulbs cultured under similar conditions for 24 hours confirmed bioactivity of purified human Slits on chicken neurons. In summary, differential sensitivity to Slit repellents may influence the directional outgrowth of otic axons toward either the anterior or posterior otocyst.

 

April 3, 2014

Jeff Lucas, Ph.D. (BIO)

Spectral and temporal coding in birds: is the envelope really that important and what does the auditory filter tell us?

                   

All auditory signals have both spectral fine structure and an amplitude envelope.  Many signals, such as the song of the zebra finch, have a particularly strong envelope resulting from stacked harmonics.  In zebra finches, cortical neurons that respond to a bird’s own song are primarily triggered by the envelope, not by the spectral properties of the song.  Thus the expectation is that species with strong envelopes will be particularly good at processing the envelope and will use this property to decode the information embedded in a song.  One aspect of the auditory system that affects the processing of temporal information is the auditory filter: broad filters facilitate temporal processing whereas narrow filters facilitate phase locking to tones.  We tested phase locking to both the envelope and fine structure of 2- and 3-tone harmonic complexes of all combinations of 1.2, 1.8 and 2.4 kHz tones.  We studied two woodland species with relatively narrow filters (white-breasted nuthatches and tufted titmice) and two open-habitat species with relatively wide filters (white-crowned sparrows and house sparrows).  All four species have peak AM rates of about 600 Hz in their calls; only nuthatches and w-c sparrows had strong AM in their songs, with AM rates of about 700 Hz in nuthatches and about 200 Hz in w-c sparrows.  Auditory evoked potentials were used to show that, as predicted, the open-habitat species phase lock more strongly than the woodland species to the 600-Hz envelope of these tone complexes.  However, contrary to our predictions, w-c sparrows also phase lock to all tone components of the complexes as strongly as, or more strongly than, all other species.  We used AEP-derived audiograms to test whether the enhanced phase locking to fine structure results from enhanced sensitivity to these tones.  In fact, the opposite is true: w-c sparrows have the poorest sensitivity of the four species to the tones used in our complexes.
Finally, the woodland species showed enhancement of phase locking strength to any given tone when the next highest harmonic was broadcast with that tone (e.g. phase locking to 1.2 kHz was enhanced when this was broadcast with 1.8 kHz).  Thus woodland species appear to process harmonic stacks by signal enhancement instead of emphasizing envelope processing.
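The 600-Hz envelope rate discussed above falls directly out of the component spacing: summing the 1.2- and 1.8-kHz tones produces an amplitude envelope that beats at their 600-Hz difference frequency. A minimal numpy sketch (the sample rate and duration are assumed values, not from the study) verifies this:

```python
import numpy as np

fs = 48_000                          # assumed sample rate (Hz)
n = int(fs * 0.5)                    # 0.5-s tone complex
t = np.arange(n) / fs

# Two-tone complex from the talk: 1.2 and 1.8 kHz components.
x = np.cos(2 * np.pi * 1200 * t) + np.cos(2 * np.pi * 1800 * t)

# FFT-based analytic signal (same idea as scipy.signal.hilbert),
# whose magnitude is the amplitude envelope.
X = np.fft.fft(x)
h = np.zeros_like(X)
h[0], h[1:n // 2], h[n // 2] = 1, 2, 1
env = np.abs(np.fft.ifft(X * h))

# The envelope spectrum peaks at the 600-Hz component spacing.
E = np.abs(np.fft.rfft(env - env.mean()))
f = np.fft.rfftfreq(n, 1 / fs)
print(round(f[np.argmax(E)]))        # 600
```

The same construction applies to the 3-tone complexes: equally spaced components 600 Hz apart again yield a 600-Hz envelope periodicity.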

 

April 10, 2014

Jesyin Lai (PULSe, Bartlett Lab)

Measurements of Distortion Product Otoacoustic Emissions and Frequency-Following Responses in Young and Aged Rats

Speech, music, animal vocalizations and natural sounds are dynamic signals with multiple time-varying modulations in amplitude and frequency. Temporal cues provided by amplitude modulation (AM) are crucial for speech recognition, and elderly listeners with normal hearing thresholds have difficulties detecting small modulation depths. Reduced sensitivity to changes in the timing of sounds in the elderly cannot be attributed solely to peripheral hearing loss. Many studies have focused on peripheral hearing loss in aging. However, limited research has examined the effects of age on central auditory processing and the relationship of age-related central auditory deficits to peripheral degradation. There is a gap in knowledge about how these auditory deficits can lead to a decline in the perception of complex signals. This study aims to fill that gap by identifying the relationship of cochlear mechanics in AM processing to neural measures of central temporal processing of AM. In this study, we assess cochlear mechanics using distortion product otoacoustic emissions (DPOAEs) and evaluate central temporal processing ability using frequency-following responses (FFRs). DPOAEs measure the cochlear mechanical output in response to two-tone stimuli and are correlated with outer hair cell function. FFRs reflect sustained neural responses to temporally modulated sounds from the brainstem and midbrain. We also studied how these measures may change with age. We measured DPOAEs and FFRs in F-344 rats using two-tone stimuli at various f2/f1 ratios, or with f1 or f2 amplitude modulated, to reveal the extent to which altered cochlear motion is related to the generation of neural responses and potentially to auditory filter width.
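For readers unfamiliar with DPOAEs: given primary tones f1 < f2, the most commonly measured emission is the cubic distortion product at 2f1 − f2, so each f2/f1 ratio in the experiment fixes where the emission is expected. A small sketch (the 8-kHz f2 is just an illustrative value, not a parameter from the study):

```python
def dpoae_freq(f2, ratio):
    """For a given f2 and f2/f1 ratio, return (f1, 2*f1 - f2),
    the lower primary and the cubic distortion-product frequency."""
    f1 = f2 / ratio
    return f1, 2 * f1 - f2

# A few illustrative f2/f1 ratios for an 8-kHz f2 primary.
for ratio in (1.1, 1.2, 1.3):
    f1, fdp = dpoae_freq(8000.0, ratio)
    print(f"f2/f1 = {ratio}: f1 = {f1:.0f} Hz, 2f1-f2 = {fdp:.0f} Hz")
```

Note how smaller ratios place the distortion product closer to the primaries, which is one reason ratio is varied when probing cochlear mechanics.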

 

 

April 17, 2014

Beth Strickland, Ph.D. (SLHS)

Behavioral explorations of cochlear gain reduction

Physiological measures have shown that the medial olivocochlear reflex (MOCR) decreases the gain of the cochlear active process in response to ipsilateral or contralateral sound.  As a first step to determining its role in human hearing in different environments, our lab has used psychoacoustical techniques to look for evidence of the MOCR in behavioral results.  Well-known forward masking techniques that are thought to measure frequency selectivity and the input/output function at the level of the cochlea have been modified so that the stimuli (masker and signal) are short enough that they should not evoke the MOCR.  With this paradigm, a longer sound (a precursor) can be presented before these stimuli to evoke the MOCR.  The amount of threshold shift caused by the precursor depends on its duration and its frequency relative to the signal in a way that supports the hypothesis that the precursor has reduced the gain of the cochlear active process.  The magnitude and time course of gain reduction measured across our studies will be discussed.  The results support the hypothesis that one role of the MOCR may be to adjust the dynamic range of hearing in noise.

 

April 24, 2014

Mark Sayles, Ph.D. (SLHS, Heinz Lab)

Amplitude-modulation detection and discrimination in the chinchilla ventral cochlear nucleus following sensorineural hearing loss

Background: Amplitude modulation (AM) is a common feature of natural sounds, and an important cue in audition. Modulation supports perceptual segregation of "objects" in complex acoustic scenes, and provides information for, e.g., speech understanding and pitch perception. Previous work in our laboratory showed increased modulation gain without change in temporal modulation transfer function (tMTF) bandwidth in auditory-nerve fiber (ANF) responses to 100%-sinusoidal amplitude-modulated (SAM) tones measured in chinchillas with noise-induced hearing loss (HL), compared to normal-hearing (NH) controls. Hearing-impaired listeners' perceptual difficulties, and physiological correlates thereof, often emerge in background noise. This study aims to quantify the neural detection and discrimination of AM sounds embedded in background noise, in the ventral cochlear nucleus in NH and HL chinchillas.

Methods: The VCN is the first brainstem processing station of the ascending auditory pathway, and provides significant input-output transformations with respect to AM representation; i.e., enhanced spike synchrony to the amplitude envelope at the output of several distinct cell types relative to their ANF inputs. We recorded spike times in response to SAM tones with modulation depths between 0% and 100% from all major VCN unit types in anesthetized NH and HL chinchillas. Signals were presented in quiet, and embedded in three levels of broadband noise (10, 15, and 20 dB), in an interleaved manner, for 20 presentations. HL animals had previously been exposed to 116-dB-SPL, 500-Hz-centered, octave-band Gaussian noise. Spike times were analyzed in terms of synchrony to the amplitude envelope, tMTFs were calculated, and a signal-detection-theoretic analysis was used to compute modulation-detection and discrimination thresholds.
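Spike synchrony to the envelope is conventionally quantified with vector strength (Goldberg & Brown, 1969). A minimal sketch, assuming spike times in seconds and using a synthetic spike train rather than the recorded data:

```python
import numpy as np

def vector_strength(spike_times, fm):
    """Vector strength of spikes relative to a modulator at fm (Hz):
    1 = perfect phase locking to the envelope, 0 = no locking."""
    phases = 2 * np.pi * fm * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

# Toy check: one spike per period of a 128-Hz modulator is (nearly)
# perfectly locked; adding temporal jitter lowers the vector strength.
rng = np.random.default_rng(0)
locked = np.arange(100) / 128.0        # exact multiples of the period
jittered = locked + rng.normal(0.0, 0.001, locked.size)
print(vector_strength(locked, 128.0) > 0.999)                             # True
print(vector_strength(jittered, 128.0) < vector_strength(locked, 128.0))  # True
```

A tMTF is then just vector strength plotted as a function of modulation frequency, typically gated by a Rayleigh test for significant phase locking.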

Results & Conclusions: Responses were obtained from 136 isolated single units (70 NH, 66 HI). The addition of background noise increased (i.e., worsened) detection and discrimination thresholds for NH and HI animals. However, the effect of noise was greater for units from HI animals than for NH controls. Primary-like units (corresponding to bushy cells) were more susceptible to the combined effects of noise and hearing impairment than were chopper units (corresponding to T-multipolar cells). For some unit types (transient and sustained chopper units), detection threshold in quiet was lower (i.e., better) for HI compared to NH animals. The greater effect of noise in HI animals is consistent with broadened tuning in the ANF inputs to the VCN, resulting from noise-induced cochlear pathology.

Funding: Supported by an Action on Hearing Loss Fulbright Commission scholarship, and NIH grant R01-DC009838.     

 

May 1, 2014

Björn Herrmann, PhD (visitor, Bartlett Lab)

Auditory Cognition Group

Max Planck Institute for Human Cognitive and Brain Sciences

Leipzig, Germany

 

Dynamic adjustments of neural activity to a temporally and spectrally changing acoustic environment

In this talk, I will present recent human EEG/MEG data from our “Auditory Cognition” lab on the dynamics with which neural activity adjusts to spectral as well as temporal information of the acoustic environment. I will show that transient neural responses in auditory cortex change in magnitude depending on the spectral variance in the stimulation. I will further show that neural oscillations in auditory cortex adjust to the temporal event structure of the stimulation, and how listening behavior is affected by this adjustment.