Seminars in Hearing Research at Purdue

 

Abstracts

Talks in 2024-2025

Nelson 1215 [Thursdays 12:00-1:00pm]


 

August 22, 2024

Eric Rodriguez, AuD, PhD Student, SLHS

 

Speech perception outcomes of Advanced Bionics V1 cochlear implant recipients.

Cochlear implants are the gold-standard treatment for severe to profound hearing loss in children and adults. As the most successful sensory prosthetic devices, cochlear implants have yielded significant improvements in speech perception in quiet for their recipients. However, these devices are not infallible and are occasionally susceptible to failure. This study aims to evaluate the impact of re-implantation on speech perception outcomes in patients who received the Advanced Bionics Ultra HiRes (V1) or Ultra HiRes 3D (V1) devices, which were recalled in February 2020. The primary goal of this project is to determine whether patients show significant improvement in speech perception scores following re-implantation compared to their performance after implantation of their original device.


 

August 29, 2024

Afagh Farhadi, PhD, Postdoctoral Researcher, SLHS

 

Understanding the Physiological Roles of MOC Efferent Pathways for Hearing in Noise

To leverage the potential benefits of the medial olivocochlear (MOC) efferent system in hearing aids, it is essential to study the specific neural mechanisms within the MOC system and how they are altered by hearing loss. The MOC system dynamically adjusts cochlear gain based on its input pathways, including two major inputs: the inferior colliculus (IC) and the cochlear nucleus (CN). While the CN input has been extensively studied, the IC input remains less explored. IC cells in the midbrain are sensitive to low-frequency fluctuations in auditory-nerve (AN) responses, potentially conveying spectral information encoded within these fluctuations. Computational modeling from my PhD thesis showed that this distinct information provided by the IC input to the MOC system can explain auditory phenomena that cannot be fully accounted for by considering only the CN input. The complexity of, and lack of detailed physiological data on, MOC inputs, particularly from higher-level projections such as the IC, underscore the need for innovative physiological methods and approaches. To address these issues, we outline three specific aims, each involving novel physiological methods to isolate and manipulate individual MOC pathways. The goal is to create a comprehensive dataset that significantly enhances our understanding of the MOC efferent pathways. In Aim 1, we will explore the role of the IC input to the MOC system by varying modulation frequency within a forward-masking paradigm, while simultaneously recording envelope following responses (EFRs) and transient-evoked otoacoustic emissions (TEOAEs) to track MOC-induced changes in cochlear gain and neural responses. This will provide a detailed evaluation of the MOC efferent system, focusing specifically on the role of the IC input. Aim 2 will investigate the relative role of the CN input to the MOC system by using a temporary threshold shift (TTS) noise-exposure animal model of cochlear synaptopathy, which will isolate the wide-dynamic-range CN pathway while leaving the IC input relatively unaffected. Finally, Aim 3 will examine the effects of sensorineural hearing loss (SNHL) on MOC output projections to the outer hair cells (OHCs) by measuring how neural coding is influenced with and without efferent electrical stimulation in both hearing-loss and normal-hearing animals.
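
As a rough conceptual illustration of the gain-control idea sketched above, and emphatically not the speaker's model, the Python snippet below combines a level-driven, CN-like drive with a fluctuation-driven, IC-like drive to set cochlear gain. All weights, thresholds, and numbers are invented for illustration:

```python
# Conceptual sketch (not the speaker's model): cochlear gain set by two MOC
# inputs, a level-driven CN-like drive and a fluctuation-driven IC-like drive.
# Weights and the 40 dB knee are hypothetical values chosen for illustration.

def moc_gain(level_db, fluctuation_depth, w_cn=0.01, w_ic=0.5):
    """Return a gain factor in (0, 1]; stronger drive -> more gain reduction."""
    cn_drive = w_cn * max(level_db - 40.0, 0.0)  # wide-dynamic-range, level-based
    ic_drive = w_ic * fluctuation_depth          # sensitive to AN envelope fluctuations
    return 1.0 / (1.0 + cn_drive + ic_drive)

# A strongly fluctuating sound reduces gain more than a flat one at equal level:
print(moc_gain(70.0, fluctuation_depth=0.0))  # flat noise -> gain ~0.77
print(moc_gain(70.0, fluctuation_depth=1.0))  # modulated noise -> gain ~0.56
```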


 

September 5, 2024

Edward Bartlett, Associate Dean for Undergraduate Affairs, College of Science & Professor, Depts. of Biological Sciences and Biomedical Engineering

Brandon Coventry, Postdoctoral Fellow, Wisconsin Institute for Translational Neuroengineering

 

Practical Bayesian Inference in Neuroscience: Or How I Learned to Stop Worrying and Embrace the Distribution

Typical statistical practices in the biological sciences have been increasingly called into question due to difficulties in replicating a growing number of studies, many of which are confounded by the difficulty of designing and interpreting null hypothesis significance tests and their p-values. Bayesian inference, representing a fundamentally different approach to hypothesis testing, is receiving renewed interest as a potential alternative or complement to traditional null hypothesis significance testing due to its ease of interpretation and explicit declaration of prior assumptions. Bayesian models are more mathematically complex than equivalent frequentist approaches, which has historically limited their application to simplified analysis cases. However, the advent of probability-distribution sampling tools, together with exponential increases in computational power, now allows for quick and robust inference under any distribution of data. Here we present a practical tutorial on the use of Bayesian inference in the context of neuroscientific studies, using both rat electrophysiological and computational modeling data. We first provide an intuitive discussion of Bayes' rule and inference, then formulate Bayesian-based regression and ANOVA models using data from a variety of neuroscientific studies. We show how Bayesian inference leads to easily interpretable analyses of data while providing an open-source toolbox to facilitate the use of Bayesian tools.
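
In the spirit of the tutorial, here is a minimal, self-contained illustration of Bayes' rule via grid approximation, producing a posterior mean and credible interval in place of a p-value. The data are simulated, and this is a sketch rather than the speakers' toolbox, which uses probability-distribution sampling:

```python
# Bayes' rule by grid approximation: posterior over a mean effect size,
# given simulated data and a weakly informative prior. Numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=5.0, size=20)  # simulated effect measurements

mu_grid = np.linspace(-10, 10, 1001)            # candidate mean effects
prior = np.exp(-0.5 * (mu_grid / 10.0) ** 2)    # N(0, 10) prior, unnormalized
prior /= prior.sum()

sigma = 5.0                                     # known noise SD, for simplicity
loglik = np.array([np.sum(-0.5 * ((data - mu) / sigma) ** 2) for mu in mu_grid])
posterior = prior * np.exp(loglik - loglik.max())   # Bayes' rule, up to a constant
posterior /= posterior.sum()

post_mean = np.sum(mu_grid * posterior)         # posterior mean
cdf = np.cumsum(posterior)                      # 95% credible interval from the CDF
lo, hi = mu_grid[np.searchsorted(cdf, 0.025)], mu_grid[np.searchsorted(cdf, 0.975)]
print(f"posterior mean = {post_mean:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
```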


 

September 12, 2024

Arianna LaCroix, Assistant Professor, SLHS

 

Specific Aims Presentation: Feasibility of a music-based intervention to promote cognitive-linguistic and neural recovery in aphasia

Aphasia is a disorder marked by impairments in language and cognition. There is substantial variability in how well people with aphasia (PWA) respond to treatment. This variability likely stems from aphasia therapy primarily focusing on language treatments, despite intervention response being predicted by both language and cognitive measures. Addressing cognition is crucial to improving the effectiveness of aphasia treatment programs, as PWA with cognitive deficits have worse rehabilitation outcomes than those without. Attention is a prime cognitive target for aphasia treatment because it is a foundational process that supports other cognitive functions, including language, and the functional connectivity of the brain’s attention networks predicts intervention response. However, there are currently few treatments that target attention in PWA, and those that do lack generalizability to the attention resources that support language. Music-based interventions (MBIs) may provide an alternative mechanism for treating attention deficits in PWA. Music may improve attention by increasing the functional connectivity within the brain's attention networks, which may allow an individual to better capitalize on conventional speech and language therapy. The purpose of this R34 grant is to examine the feasibility and acceptability of a well-established MBI for use with PWA. Aim 1 will pilot the intervention (music listening) and active control (audiobook listening) in two groups of PWA. Attention and language will be assessed for all participants before, after, and every two weeks during the intervention. Additionally, structural and functional MRI scans will be collected from a subset of participants in both groups before and after the intervention. In Aim 2, we will assess the feasibility of our recruitment, retention, and data collection procedures. The results from this proposal will serve as pilot data for a fully powered R01 exploring whether MBIs induce changes in the brain that improve attention and language abilities in post-stroke aphasia.  


 

 

September 19, 2024

Alex Hustedt-Mai, Au.D., CCC-A, and Michael G. Heinz, Ph.D., Professor and Associate Head for Research

 

Accessible Precision Audiology Research Center (APARC) Opening at the 16 Tech Innovation District in Indianapolis

Currently over 15% of American adults (40 million) have difficulty hearing. Untreated hearing loss is associated with increased cognitive decline, dementia, social isolation, falls and mental health disorders. Alarmingly, those with untreated hearing loss have 46% higher healthcare costs than those without trouble hearing. Despite this, only 1 in 6 people who need hearing aids have ever used them. A likely contributing factor is that the benefit of hearing aids remains limited due to the lack of standardized diagnostics for the known subtypes of sensorineural hearing loss, e.g., two patients having identical hearing loss clinically, but having vastly different abilities to understand speech in noise in the real world. With support from the Life and Health Sciences Summit sponsored by the Purdue Office of Research and Provost’s Office, clinical and research faculty from SLHS, Computer Science, Biology, and Biomedical Engineering are launching APARC this Fall. The team leverages Purdue’s expertise at the intersection of audiology, auditory neuroscience and AI-driven data analytics to focus on the need for precision audiology diagnostic measures. Located at the 16 Tech Innovation District in Indianapolis, APARC is uniquely situated next to the Artisan Marketplace (AMP) food court to provide access to diverse subject populations (e.g., across socio-economic status, race, and hearing profile), as well as to serve as a hearing-health hub for the Indianapolis community. The goals for this project are to 1) provide accessible hearing testing and educate the community on health and economic consequences of untreated hearing loss, 2) understand underlying barriers to pursuing hearing-health solutions, 3) develop and validate accessible approaches to audiological assessment, and 4) develop a large diverse AI-driven data resource to support precision audiology. APARC provides unique opportunities for students and faculty from audiology, engineering, and data science in West Lafayette and Indianapolis to synergize efforts to develop precision audiology approaches to reduce the burden of untreated hearing loss.


 

September 26, 2024

Shawn S. Goodman, PhD, Director of Graduate Studies, Dept. of Communication Sciences and Disorders, University of Iowa

 

Fast, Comprehensive Characterization of Middle Ear Muscle Reflex Dynamics

Middle-ear muscle reflex (MEMR) thresholds are routinely measured in the audiology clinic. Recent research has shown that MEMR thresholds are elevated in people with evidence of afferent synapse loss despite normal audiometric hearing (i.e., "hidden hearing loss"). To measure the MEMR, acoustic elicitors are classically presented in discrete steps of sound level and frequency while a 226 Hz probe tone is used to monitor reflex activation. Both clinical practice and research efforts could benefit from faster data collection. We recently developed a MEMR measurement paradigm that makes rapid measurements with an elicitor noise that continuously sweeps in level and a probe click that simultaneously measures over a broad range of test frequencies. Our measurements characterize the MEMR in terms of onset and offset thresholds, growth rates, hysteresis, and peak levels. Measurements from 30 participants will be presented, including retest reliability. Measurements from the new paradigm will be compared with those obtained using discretely varying stimuli.
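
As a toy illustration of the quantities named above, the sketch below extracts onset/offset thresholds and hysteresis from simulated probe-shift curves recorded during ascending and descending elicitor sweeps. The activation criterion, curve shapes, and numbers are all hypothetical, not the authors' analysis:

```python
# Hypothetical sketch: onset/offset thresholds and hysteresis from simulated
# MEMR activation curves for up- and down-swept elicitors. Numbers invented.
import numpy as np

rng = np.random.default_rng(1)
elicitor_db = np.linspace(50, 90, 81)           # elicitor levels (dB SPL)
# Simulated probe-level shift (dB): sigmoidal activation plus noise; the
# reflex stays "on" down to lower levels on the down-sweep (hysteresis).
up = 2.0 / (1 + np.exp(-(elicitor_db - 72) / 2)) + rng.normal(0, 0.05, 81)
down = 2.0 / (1 + np.exp(-(elicitor_db - 68) / 2)) + rng.normal(0, 0.05, 81)

criterion = 0.25                                # dB shift defining "activated"
onset = elicitor_db[np.argmax(up > criterion)]     # first activated level, up-sweep
offset = elicitor_db[np.argmax(down > criterion)]  # lowest level still active, down-sweep
print(f"onset ~ {onset:.1f} dB, offset ~ {offset:.1f} dB, "
      f"hysteresis ~ {onset - offset:.1f} dB")
```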


 

October 3, 2024

Srivatsun Sadagopan, Associate Professor, University of Pittsburgh

 

Auditory cortical computations for robust vocalization recognition.

Ethologically important sounds such as human speech and animal vocalizations are typically produced with tremendous between-subject and inter-trial variability. These sounds are also encountered in highly variable listening environments. A central function of auditory processing is to generalize over this variability and group sounds that carry distinct behavioral meanings into discrete categories. In this talk, I will describe a theoretical model of how such categorization can be achieved in the case of animal vocalizations. Using guinea pigs performing vocalization categorization tasks as an animal model, I will present electrophysiological and behavioral experiments that validate the model. I will then propose a framework for modeling attentional enhancement of sound category representations and describe ongoing experiments to test model predictions using large-scale neural recordings in behaving animals. In summary, these theoretical and experimental results will propose a biologically interpretable hierarchical model of auditory processing in which early acoustic representations are transformed into downstream goal-directed representations that support specific behaviors.


 

October 10, 2024

Elizabeth Strickland, Professor, SLHS

 

The effects of preceding sound on psychoacoustic tuning curves measured in simultaneous and forward masking.

The medial olivocochlear reflex (MOCR) decreases the gain of the cochlear active process in response to sound. We have used psychoacoustic techniques to show behavioral effects of gain reduction that could be consistent with the MOCR. We have used paradigms understood to measure frequency selectivity and the input/output function at the level of the cochlea, using stimuli (masker and signal) that should be too short to evoke the MOCR. A precursor sound is then presented before these stimuli to evoke the MOCR. Our most recent studies have used forward masking to avoid the complicating effects of suppression. The current study was designed to examine the effects of suppression by comparing the effects of a precursor on frequency selectivity measured using forward masking and simultaneous masking. Psychoacoustic tuning curves were measured using simultaneous and forward masking, with and without a precursor. This allowed us to measure the change in tuning with and without the presence of suppression. Broadband precursors and tonal precursors at the masker frequencies were tested because previous studies have shown broadening of tuning with broadband precursors and sharpening when tonal maskers serve as precursors. Results will be discussed in the context of current understanding of the MOCR.


 

October 17, 2024

Jeffery Lucas, Professor, BIO

 

What in the world is animal communication?

In the old days (really, the 'good old days'), animal communication was defined as an interchange between a signaler and a receiver during which the signaler produces a signal that propagates through the environment, after which the receiver responds to the signal with some predictable behavior. The process was expected to be adaptive for the signaler and may or may not be adaptive for the receiver. This simplistic landscape has now been turned on its head. Nowadays (really, 'these days…'), the concept of animal communication has become extraordinarily complex. I will cover animal communication over both time frames, and at the very least talk about some cool examples.


 

October 24, 2024

Charlotte Garcia, Postdoctoral Researcher, University of Cambridge

 

Electrophysiological measures of auditory perception in cochlear implant users from the electrode-neuron interface to the cortex.

Cochlear implants (CIs) are arguably the most successful neuro-prosthetic device today, restoring auditory perception to severe-to-profoundly deaf individuals by directly stimulating the auditory nerve. Many people do very well with their devices, but there is significant variability between users, both in their hearing pathologies and in their ability to understand speech through their implants. While stimulation settings are adjusted in the clinic to achieve appropriate loudness percepts for each cochlear-implant user, this is a subjective and time-consuming process that is infeasible for infants and is not optimized to each individual patient's unique pattern of hearing loss. To improve speech perception for those who struggle to communicate with their CIs, the development of objective, electrophysiological measures of auditory perception can help characterize the interaction between each individual patient's implant and their brain. These measures could then be leveraged to optimize speech perception for individual patients. This seminar will focus on two projects that aim to characterize the auditory perception of individual CI users using electrophysiological techniques. At the periphery, the Panoramic ECAP Method aims to characterize the interface between the electrodes of a CI and the auditory nerve they stimulate along the length of the cochlea. This is done using measurements of the population-level compound action potentials of the auditory nerve in response to stimulation with a CI, recorded using the electrodes of the device itself. The method provides estimates of current spread and neural responsiveness and their variation along the length of an individual CI user's cochlea, and it has great potential for translation to the clinic because no additional hardware is needed to run the test. However, this only characterizes the periphery of electrical hearing and cannot be representative of any higher-order auditory perception. Electroencephalography (EEG) can be used to measure auditory responses at the cortical level, but these can easily be masked by the electrical artefacts from CI stimulation of the auditory nerve. ALFIES (ALternating Frequency Interleaved Electrical Stimulation) is a method for extracting cortical neural responses from stimulation artefacts at stimulation rates representative of standard CI programming strategies. This is done by stimulating with two interleaved amplitude-modulated current-pulse trains, and it relies on the assumption that, while the EEG system is linear, smoothing in the brain precedes its nonlinearities and results in a perceptible distortion product at a frequency where there is no electrical artefact. The recording system of the CI itself has also been investigated to determine whether the response can be captured without the need for EEG equipment. Both techniques could be used to characterize auditory perception in CI users for whom behavioural responses cannot be measured (such as infants). Development and translation of these kinds of personalized hearing-healthcare techniques may improve speech perception for cochlear-implant users who would otherwise struggle to communicate with the auditory world around them.
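
To make the distortion-product logic concrete, here is a minimal numerical sketch (not the published ALFIES implementation): two modulation envelopes pass through a compressive nonlinearity, and energy appears at f2 - f1, a frequency at which the stimulus itself has none. The frequencies and the tanh nonlinearity are arbitrary stand-ins:

```python
# Conceptual ALFIES-style demo: a nonlinearity acting on two summed modulation
# envelopes creates a distortion product at f2 - f1, free of stimulus energy.
import numpy as np

fs = 1000.0                                   # envelope-domain sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
f1, f2 = 37.0, 43.0                           # modulation frequencies (hypothetical)
stimulus = (1 + np.cos(2 * np.pi * f1 * t)) + (1 + np.cos(2 * np.pi * f2 * t))

response = np.tanh(stimulus / 2)              # compressive "neural" nonlinearity

spectrum = np.abs(np.fft.rfft(response)) / len(response)
freqs = np.fft.rfftfreq(len(response), 1 / fs)
for f in (f1, f2, f2 - f1):                   # energy now appears at 6 Hz = f2 - f1
    print(f"{f:5.1f} Hz: {spectrum[np.argmin(np.abs(freqs - f))]:.4f}")
```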


 

October 31, 2024

Abigail L. Metzger, AuD Student, SLHS

 

Comparing active vs passive auditory fNIRS experiments: Effects of response format and physiology correction.

Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool used to measure changes in the concentration of oxygenated and deoxygenated hemoglobin in the brain associated with neural activity. fNIRS has proven to be a useful tool in auditory neuroimaging experiments, in both passive-listening and active-response experimental designs. Passive listening tasks are chosen to avoid introducing noise into the data, whereas active response tasks are chosen as a more ecologically valid assessment of listening and communication. However, it is not clear which type of task yields a better estimate of the underlying auditory-evoked neural activity. This study aims to investigate the differences in evoked activity measured by fNIRS between passive listening tasks and active response tasks. Additionally, results are compared before and after controlling for systemic physiological signals (heart rate, blood oxygen saturation (SpO2), photoplethysmography (PPG), and respiration) to investigate the effects these signals have on the measured changes in hemoglobin concentration and whether those effects differ between passive and active tasks.
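
One common form of physiology correction is to regress the systemic signals out of each fNIRS channel. The sketch below shows that idea with ordinary least squares on simulated data; the study's actual pipeline may differ:

```python
# Regressing systemic physiology out of a simulated fNIRS channel with OLS.
# All signals are random placeholders; only the correction logic is the point.
import numpy as np

rng = np.random.default_rng(2)
n = 5000                                            # samples
hr, spo2, ppg, resp = rng.normal(size=(4, n))       # placeholder physiological signals
neural = rng.normal(size=n)                         # "true" evoked component
fnirs = neural + 0.8 * hr + 0.3 * resp + 0.1 * ppg  # contaminated channel

X = np.column_stack([np.ones(n), hr, spo2, ppg, resp])  # design matrix + intercept
beta, *_ = np.linalg.lstsq(X, fnirs, rcond=None)
corrected = fnirs - X[:, 1:] @ beta[1:]             # remove fitted physiology

print(f"correlation with neural signal before: {np.corrcoef(fnirs, neural)[0, 1]:.2f}, "
      f"after: {np.corrcoef(corrected, neural)[0, 1]:.2f}")
```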


 

November 7, 2024

Fernando Aguilera de Alba, PhD Student, BME

 

Acute Peripheral and Central Auditory Deficits Following Continuous Aircraft-Carrier Noise Exposure at Moderate Sound Levels.

In 2024, the Veterans Benefits Administration reported that tinnitus and hearing loss are some of the most prevalent service-connected disabilities, accounting for 12% of all veteran compensations. Service members are often exposed to varying types of damaging sounds, which may be present continuously (e.g., aircraft carriers) or just briefly (e.g., improvised explosive devices, IEDs). It is imperative to understand how exposure to different sounds affects auditory processing across the auditory pathway, especially at moderate sound levels considered to be non-damaging. Awake chinchillas (n = 18, 9 female) were exposed to noise mimicking aircraft-carrier conditions experienced by U.S. Navy service members. Animals were exposed for four consecutive weeks (40 hours/week) at 87.5 dBA, using chinchilla middle-ear absorbance to determine sound-level weighting. Auditory assessment was performed pre- and post-exposure (1, 2, and 4 weeks from the start of noise exposure) using the following measures: tympanometry, wide-band middle-ear muscle reflex (WB-MEMR), otoacoustic emissions (OAEs; swept distortion product, DP; swept stimulus frequency, SF; transient evoked, TE), auditory brainstem response (ABR), and envelope following response (EFR) to modulated sounds. Sedated measures (ABR and EFR) were performed under anesthesia, while awake measures (tympanometry, WB-MEMR, and OAEs) were collected with the animals restrained and fully conscious. DPOAEs: Reduced amplitudes at mid-to-high frequencies (5-20 dB shift at 2-12 kHz) as early as 1 week post-exposure with no sign of recovery or worsening up to 4 weeks. SFOAEs: Progressive amplitude reduction at mid-to-high frequencies (5-15 dB shift at 3-8 kHz) with no recovery. TEOAEs: Mixed effects were observed across all frequencies. WB-MEMR: Absorbed power was progressively reduced up to 2 weeks post-exposure, but partially recovered by week 4. ABR: Hearing thresholds were elevated across all frequencies, resulting in acute mild hearing loss with no signs of recovery. EFR: Reduced neural coding of middle harmonics was evident at 1 week post-exposure with no signs of recovery. Our multi-metric auditory framework has highlighted potential hearing deficits and how these deficits develop over time due to continuous noise exposure, up to 4 weeks post-exposure. Both peripheral and central deficits appeared within one week of noise exposure and persisted up to 4 weeks. These findings will help elucidate the relative contributions of peripheral and central damage to the overall development of hearing loss due to continuous noise exposure. The second part of the project will evaluate auditory deficits following blast-induced injury using the same auditory framework.


 

 

November 14, 2024

William Salloom, Postdoctoral Researcher, University of Southern California (USC)

 

Perceptual Masking and Cochlear Suppression Measured using Frequency Sweeps

The auditory system's sensitivity to rapid frequency changes is crucial for speech and music perception. Human behavioral studies using frequency-modulated maskers have shown that the direction of modulation—either increasing or decreasing in frequency with time—significantly affects masked thresholds of a tonal signal. While this effect has traditionally been attributed to cochlear dispersion, recent animal studies suggest that nonlinear cochlear suppression may also be a key factor. To explore this, a combination of behavioral and otoacoustic measures was used to examine the roles of dispersion and suppression in normal-hearing humans.


November 21, 2024

Joshua Alexander, Associate Professor, SLHS

 

Minimizing Disruptions, Maximizing Recall: The Role of Sudden Sound Control in Hearing Aids.

This study explored whether a novel hearing aid algorithm designed to reduce the impact of both sudden loud and soft sounds can enhance auditory-cognitive outcomes in individuals with hearing loss by enabling smoother sentence recall and storage amidst challenging listening environments. Participants completed a comprehensive audiological evaluation and cognitive assessment before engaging in a listening experiment designed to probe the effects of various sound environments on speech perception. Participants listened to recorded sentence pairs embedded with sudden loud (e.g., gunshot, door slam) and soft (e.g., keyboard typing, footsteps) sounds that, although unlikely to mask the speech, were hypothesized to interfere with sentence storage and retrieval processes. Four settings for sudden sound reduction were evaluated (off, low, high, and maximum) to investigate whether the novel algorithm might improve cognitive processing by mitigating the disruptive effects of unexpected sounds. Performance was measured by having participants recognize, store, and retrieve sentences under each condition, while subjective ratings captured their preferences for each sound reduction setting. Preliminary results indicate that while most participants preferred the "maximum" setting for sudden sound reduction, only a subset demonstrated measurable improvements in speech comprehension and recall, most often with the "high" setting. The discussion will explore whether specific cognitive and auditory factors distinguish these individuals from others, contributing to an emerging approach, "precision audiology," that tailors hearing aid settings to individual cognitive and perceptual profiles for optimal benefit.
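
The algorithm under test is proprietary, but a generic transient suppressor conveys the flavor of "sudden sound control": attenuate samples whose short-term level jumps well above a slow running average. Everything below (time constants, the 12 dB allowance) is illustrative only, not the tested implementation:

```python
# Illustrative transient suppressor (not the proprietary algorithm tested):
# limit the fast envelope to at most max_jump_db above the slow envelope.
import numpy as np

def suppress_sudden_sounds(x, fs, fast_ms=2.0, slow_ms=200.0, max_jump_db=12.0):
    def smooth(sig, ms):                      # one-pole envelope smoother
        alpha = 1.0 / (fs * ms / 1000.0)
        out, acc = np.empty_like(sig), sig[0]
        for i, v in enumerate(sig):
            acc += alpha * (v - acc)
            out[i] = acc
        return out

    env_fast = smooth(np.abs(x), fast_ms) + 1e-9
    env_slow = smooth(np.abs(x), slow_ms) + 1e-9
    jump_db = 20 * np.log10(env_fast / env_slow)
    excess = np.clip(jump_db - max_jump_db, 0.0, None)  # dB above the allowance
    return x * 10 ** (-excess / 20.0)                   # attenuate only the excess

fs = 16000
t = np.arange(fs) / fs
x = 0.1 * np.sin(2 * np.pi * 440 * t)         # ongoing speech-like tone
x[8000:8040] += 0.9                           # simulated door slam
y = suppress_sudden_sounds(x, fs)             # slam attenuated, tone untouched
```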




December 5, 2024

Meredith Christine Ziliak, PhD Candidate, BIO

 

The Progression of Damage in the Peripheral and Central Auditory Systems Following Small Arms Fire-Like Noise Exposure

Small arms fire-like (SAF) noise is an acute form of noise exposure commonly encountered in military and law-enforcement occupations, as well as in recreational activities. While it is well known that noise exposure damages the auditory system, most studies focus on identifying changes in response to extended narrowband noise exposures. It is still unclear how SAF noise induces damage to the peripheral and central auditory systems. Additionally, the studies that have investigated the effects of SAF noise on the auditory system have primarily looked at either immediate or longitudinal changes (Altschuler et al., 2019). Therefore, the purpose of this study is to identify the progression of functional changes in the peripheral and central auditory systems in response to SAF noise using distortion product otoacoustic emissions (DPOAEs), thresholds, auditory brainstem responses (ABRs), and middle latency responses (MLRs). We hypothesize that SAF noise exposure will result in phases of damage that differentially affect cochlear and neuronal function. Acute damage up to four weeks may resemble phenotypes consistent with traditional noise-induced damage, such as reduced DPOAE amplitudes, elevated thresholds, and reduced wave 1 amplitudes, whereas early longitudinal damage from four weeks onward may resemble electrophysiological phenotypes consistent with aging or with damage induced by cellular stress and chronic dysfunction in damaged neurons. To test these hypotheses, F344 rats (3-6 months) were exposed to SAF noise at either 120 dB pSPL (SAF exposure group; n = 8, F = 4) or 60 dB pSPL (sham group; n = 4, F = 2). At baseline and 7-, 14-, 28-, and 56-days post-exposure, we measured DPOAEs (f2 = 4, 8, and 10 kHz; f2/f1 = 1.22; L1 = 60 dB; L1-L2 = 10 dB), ABRs (0.03 ms click, 8 kHz, and 10 kHz), and MLRs (repeated click groupings of decreasing inter-click intervals, called 1-2-8 clicks). Thresholds were persistently elevated. Distortion product (DP) amplitudes were persistently decreased at 8 and 10 kHz, but not at 4 kHz. ABR waveform and peak analysis demonstrated an overall decrease in amplitude, with a greater decrease in wave 5. MLR peak analysis demonstrated patterns of damage distinct to each click across days. All measures demonstrated a general trend of damage before minor recovery or plateau. Our findings suggest that the diagnostic profile of SAF noise exposure may differ from previously studied models of noise-induced hearing loss. Future work will identify mechanisms of damage at different time points post-exposure through anatomical imaging of biomarkers of damage to associate changes in function with structure.
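
For readers unfamiliar with DPOAE conventions, the quoted parameters fully determine the stimuli. A quick worked example, assuming the 2f1-f2 component that is conventionally analyzed:

```python
# Worked example of the stimulus parameters quoted above: with f2/f1 = 1.22
# and L1 - L2 = 10 dB, each f2 fixes f1 and the 2f1-f2 distortion frequency.
L1 = 60.0            # dB SPL
L2 = L1 - 10.0       # from L1 - L2 = 10 dB

for f2 in (4.0, 8.0, 10.0):                       # kHz
    f1 = f2 / 1.22                                # primary ratio f2/f1 = 1.22
    dp = 2 * f1 - f2                              # cubic distortion product
    print(f"f2 = {f2:5.2f} kHz -> f1 = {f1:5.2f} kHz, "
          f"2f1-f2 = {dp:5.2f} kHz, L1 = {L1:.0f} dB, L2 = {L2:.0f} dB")
```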



January 8, 2025

Andrew J. Oxenham, Departments of Psychology and Otolaryngology, University of Minnesota

 

Brain representations and perceptual development of pitch and timbre

Pitch and timbre are two fundamental features of auditory perception. Although often treated as independent, they have been found to interact and interfere with each other. In some recent behavioral work, we have confirmed and quantified perceptual confusion between pitch, based on fundamental frequency (F0), and brightness, based on the spectral centroid. In search of potential neural substrates of this confusion, we used functional magnetic resonance imaging (fMRI) to study how both pitch and brightness are represented within human auditory cortex. We found evidence for systematic mapping of both dimensions, but also evidence for interactions between them at both the local (single-voxel) and global levels. We propose that the interactions observed perceptually and neurally represent an efficient way to encode statistical covariations that occur within the natural environment. In support of this proposal, we have found some intriguing evidence that 3- and 7-month-old infants can discriminate changes in pitch and brightness in the presence of interference from the other dimension, at levels that exceed those found in adults without musical training. This pattern might be expected if young infants have not yet learned the statistical covariation between pitch and brightness, just as young infants have been shown to have not yet prioritized phonemic distinctions relevant to their native language. Finally, while cortical plasticity is implied by our findings, a separate multisite study on subcortical frequency-following responses (FFRs) to periodic stimuli has failed to replicate earlier findings of stronger responses in musicians than non-musicians. The results suggest that brain plasticity involved with adapting to our auditory environment may be cortical in nature and may not extend to subcortical structures.



January 16, 2025

Malinda McPherson, Assistant Professor, SLHS

 

Eye Hear You: Comparing Auditory and Visual Memory Capacity and Structure.

While there is growing interest in auditory/visual similarities, differences, and interactions, it is not always obvious how to compare these, and other, sensory domains. However, building broad theories of perception will ultimately require understanding how all the senses integrate and store information. In this talk, I will discuss experiments that tested new approaches for comparing visual and auditory memory capacity and structure. Based on previous results showing visual memory is better than auditory memory, we hypothesized that visual stimuli might be more inherently dissociable (dissimilar) than auditory stimuli, suggesting that with randomly selected stimuli, auditory memory performance would appear worse than visual memory simply because of the confusability of the memory probes. To test this, we used developments in deep convolutional neural networks to select well-controlled stimuli ranging from maximally similar to maximally dissimilar in both the visual and auditory domains. We also predicted that the presentation mode (simultaneous vs. sequential) would differentially impact auditory vs. visual memory. We found that hearing is more sensitive to similarity structure than vision: auditory performance worsens more rapidly than visual performance as memory probe similarity increases. Still, by changing experimental paradigms, we could observe better overall auditory memory performance than visual performance, vice versa, or comparable performance. Therefore, when comparing vision and audition, or even when examining memory for different stimuli within audition, the choice of stimuli and experimental parameters can drastically change the ultimate conclusions. Overall, information is retrieved differently in vision and hearing: vision is inherently spatial, and hearing is intrinsically temporal, and it is critical to account for these and other differences when working across the senses.
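
A minimal sketch of the similarity-controlled stimulus selection described above: rank candidate pairs by distance in a precomputed deep-network embedding space. The embeddings here are random placeholders, not the features the authors used:

```python
# Rank stimulus pairs from most to least similar by embedding distance.
# Embeddings are random stand-ins for deep convolutional network features.
import numpy as np

rng = np.random.default_rng(4)
embeddings = rng.normal(size=(100, 512))      # 100 stimuli, 512-dim features

# Pairwise Euclidean distances between all stimuli
d = np.linalg.norm(embeddings[:, None, :] - embeddings[None, :, :], axis=-1)
iu = np.triu_indices(100, k=1)                # unique pairs (upper triangle)
order = np.argsort(d[iu])                     # most-similar -> most-dissimilar

most_similar = (iu[0][order[0]], iu[1][order[0]])
most_dissimilar = (iu[0][order[-1]], iu[1][order[-1]])
print(most_similar, most_dissimilar)
```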



January 23, 2025

Elizabeth Marie Jensen, AuD Student, SLHS

 

Enhancing the Specificity of Clinical Middle-Ear Muscle Reflex Measures.

The middle-ear muscle reflex (MEMR) is assessed in audiology clinics as a cross-check with other measures in the test battery or to rule out retrocochlear pathologies. Reflexes are measured by recording the change in middle-ear admittance using a low-frequency probe tone, typically 226 Hz, in response to a high-level tonal or broadband reflex elicitor. However, because each ear is unique and different pathologies can affect the impedance of the middle ear, the probe frequency that shows the greatest change in response to the MEMR for a given individual may not be near 226 Hz. Specifically, individuals presenting with absent reflexes or elevated reflex thresholds with a 226 Hz probe may exhibit more robust MEMR-related changes with a different probe stimulus. The specificity of the MEMR may thus be enhanced with other probe tones or, more efficiently, with a wideband probe such as clicks. This study investigates the effect of probe stimulus on MEMR thresholds. Results suggest that a wideband probe may capture reflex thresholds more accurately and efficiently. Although wideband MEMR is not available in most clinics, our results suggest that other probe tones, such as 678 Hz, should be used to determine whether the MEMR is truly absent in patients where retrocochlear pathology is suspected. Ultimately, these results can reduce the medical burden of over-referral for imaging procedures and the emotional strain on patients.



February 6, 2025

Elle O'Brien, PhD, University of Michigan

 

How is generative AI changing programming for scientists?

Scientific software is at the heart of research projects in nearly every domain. With the increasing availability of generative AI tools like GitHub Copilot and ChatGPT, the practice of programming for research is sure to be affected. Because scientific conclusions frequently depend on code for data analysis, collection, visualization and simulation, the potential impacts of these tools on both basic and translational research may be substantial. This talk will present findings from a study of how scientists in several research areas are using generative AI programming assistants. We'll discuss potential impacts of increased reliance on generative AI code tools on the validity, correctness, and maintainability of scientific code. We'll also examine what kind of training, practices, and tooling might help scientists best take advantage of generative AI software tools while mitigating risks, with an emphasis on considerations for translational disciplines like hearing science.



February 13, 2025

Edward L. Bartlett, Associate Dean, College of Science & Professor, Depts. of Biological Sciences and Biomedical Engineering

 

Focal Infrared Neural Stimulation Propagates Dynamic Transformations in Auditory Cortex.

Significance: Infrared neural stimulation (INS) has emerged as a potent neuromodulation technology, offering safe and focal stimulation with superior spatial recruitment profiles compared to conventional electrical methods. However, the neural dynamics induced by INS remain poorly understood. Elucidating these dynamics will help develop new INS stimulation paradigms and advance its clinical applications. Aim: In this study, we assessed the local network dynamics of INS entrainment in the auditory thalamocortical circuit using a chronically implanted rat model; our approach focused on measuring INS energy-based local field potential (LFP) recruitment induced by focal thalamocortical stimulation. We further characterized linear and nonlinear oscillatory LFP activity in response to single-pulse and periodic INS and performed spectral decomposition to uncover specific LFP band entrainment to INS. Finally, we examined spike-field transformations across the thalamocortical synapse using spike-LFP coherence coupling. Results: We found that INS significantly increases LFP amplitude as a log-linear function of INS energy per pulse, primarily entraining to specific LFP bands. A subset of neurons demonstrated nonlinear, chaotic oscillations that were sensitive to information propagation across thalamocortical circuits. Finally, we utilized spike-field coherence to correlate spike coupling with LFP frequency-band activity and suggest an energy-dependent model of network activation resulting from INS. Conclusions: We show that INS reliably drives robust network activity and can potently modulate cortical field potentials across a wide range of frequencies in a stimulus-parameter-dependent manner. Based on these results, we propose design principles for developing full-coverage, all-optical thalamocortical auditory neuroprostheses.
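
As a small illustration of the reported log-linear relationship, one could fit LFP amplitude as a + b*ln(E) across per-pulse energies. The data and units below are simulated, not the study's measurements:

```python
# Fit a log-linear model, amplitude = a + b*ln(E), to simulated LFP data.
import numpy as np

rng = np.random.default_rng(3)
energy_mj = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # hypothetical mJ/pulse
lfp_uv = 20 + 15 * np.log(energy_mj) + rng.normal(0, 2, 5)  # simulated amplitudes (uV)

X = np.column_stack([np.ones_like(energy_mj), np.log(energy_mj)])
(a, b), *_ = np.linalg.lstsq(X, lfp_uv, rcond=None)
print(f"LFP amplitude ~ {a:.1f} + {b:.1f}*ln(E)   (uV, E in mJ/pulse)")
```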