Seminars in Hearing Research at Purdue
Abstracts
Talks in 2022-2023
LYLE 1150 [hybrid via Zoom: Thursdays 10:30-11:20 AM]
Link to Schedule
August 25, 2022
Elizabeth Strickland, Professor, SLHS
Behavioral measures of cochlear gain and gain reduction in listeners with normal hearing or minimal cochlear hearing loss
An important aspect of many sensory systems is the ability to adjust the dynamic range in response to the environment, so that changes are detectable. In the auditory system, physiological measures have shown that the medial olivocochlear reflex (MOCR) shifts the dynamic range of the cochlear active process in response to sound. However, the perceptual effects of the MOCR are not fully understood. We have developed behavioral techniques to measure changes in perception in response to preceding sound. These measures suggest a reduction in the amplification, or gain, of the cochlea following sound, which could be consistent with the action of the MOCR. A study in progress in the Psychoacoustics Lab examines the relationship between cochlear hearing impairment and gain reduction. If gain is permanently reduced by cochlear hearing impairment, we might expect to see less gain reduction, meaning less adjustment to the environment. We’ll examine whether this happens, and also look at the strength of gain reduction across audiometric frequencies.
September 1, 2022
Andrew Sivaprakasam, MD/PhD (MSTP) student, Weldon School of Biomedical Engineering
Differential Profiles of SNHL Variably Impact Neural Coding of Modulations and Pitch: Recent Findings and Planned Future Study
Sensorineural hearing loss (SNHL) can result from several etiologies, and a major goal of modern hearing research is to reliably identify specific profiles of hearing loss to better individualize therapies and hearing-assistive technology. Specifically, those who report hearing loss likely have variable patterns of cochlear anatomic damage, be it damage to the inner hair cells (IHCs), outer hair cells (OHCs), or cochlear synapses. Therefore, it is critical that we identify the physiological consequences of these patterns of damage and how they lead to perceptual deficits. It is known that SNHL degrades the ability to discriminate pitch, the cue we depend on to properly identify voices or listen to music. However, it is not known how IHC, OHC, and cochlear synapse damage specifically contribute to this deficit. I will first present our recent findings, which indicate that IHC damage and cochlear synaptopathy differentially impact neural coding of modulations and pitch in chinchillas. I will then share some preliminary data and analyses demonstrating my planned approach to link pitch discrimination deficits to abnormalities in audiological and electrophysiological measures across species.
September 8, 2022
Jeff Lucas, Professor in Biological Sciences
A tale of two stories: hearing, song and culture
I am going to update information about two systems I’ve talked about before, and show that the techniques from one and an updated data set from the other lead to a new third, potentially exciting study system. The first study is about individual variation in signal processing in four species of birds. The question is whether auditory processing of song (i.e., sexually selected signals) is more variable than auditory processing of non-song elements. The answer is ‘no’. But surprisingly, individual variation is higher in some species than in others, and the species-level variation in processing is largest in the species with the most complex songs. Hmm… We evaluated this auditory processing using cross-correlations of entire auditory evoked potentials (AEPs) derived from song and non-song elements. The second system is Carolina chickadee song, which is enormously variable across central Indiana but nowhere else in the country that we know of. More importantly, the dimensionality of song properties across these populations is incredibly complex. This sets up a perfect system in which we can ask what role auditory processing of complex sounds plays in the evolution of song-centered culture. The AEP cross-correlation technique should give us a robust approach with which to address this question. This study will allow us to ask whether the auditory consequences of cultural evolution in birds are in any way analogous to the auditory consequences of cultural evolution of tonal vs. non-tonal languages, a dichotomy which is apparently driven by humidity constraints on the human vocal system.
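As a sketch of the comparison technique mentioned above (illustrative only; the abstract does not give the lab's actual analysis pipeline), the peak normalized cross-correlation between two evoked-potential waveforms can be computed as:

```python
import numpy as np

def normalized_xcorr(aep_a, aep_b):
    """Peak normalized cross-correlation of two evoked-potential waveforms.

    Returns a value in [-1, 1]; values near 1 indicate the waveforms match
    up to a time shift and scale.
    """
    a = (aep_a - aep_a.mean()) / (aep_a.std() * len(aep_a))
    b = (aep_b - aep_b.mean()) / aep_b.std()
    return np.max(np.correlate(a, b, mode="full"))

# Toy example: two noisy responses to the same stimulus correlate highly.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.05, 500)                      # 50 ms epoch
template = np.sin(2 * np.pi * 200 * t) * np.exp(-t / 0.01)
resp1 = template + 0.1 * rng.standard_normal(t.size)
resp2 = template + 0.1 * rng.standard_normal(t.size)
print(normalized_xcorr(resp1, resp2))              # high for shared responses
```

Applied to AEPs evoked by song versus non-song elements, lower cross-correlation values would indicate more variable processing.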
September 15, 2022
Maureen Shader, Assistant Professor, SLHS
Specific Aims Presentation: Reliable measures of functional cortical processing of speech in adult cochlear-implant recipients
September 22, 2022
Samantha Hauser, AuD, SLHS PhD student
Biomarkers of cochlear pathology beyond outer hair cell dysfunction
Unlike animal models of sensorineural hearing loss, cochlear pathology in humans cannot be confirmed histologically and is the result of an uncontrollable combination of environmental and genetic factors. However, an individual’s pattern of dysfunction across the sensory hair cells, auditory nerve fibers, and stria vascularis likely correlates with specific hearing complaints and may explain variation in suprathreshold processing and hearing-aid outcomes. Unfortunately, hearing assessment in the audiology clinic is typically limited to the audiogram, which only captures the subset of dysfunctions that interfere with audibility. For my PhD fellowship proposal to NIH, I propose a cross-species experimental design which aims to stratify individuals with sensorineural hearing loss based on their estimated profile of cochlear dysfunction, in particular, separating outer hair cell dysfunction from other deficits. In chinchillas, we will compare the effects of at least two etiologies of sensorineural hearing loss on our battery of non-invasive biomarkers. This battery will then be tested in a heterogeneous cohort of humans with hearing loss where results can be compared to speech-perception measures. Based on the profile across biomarkers, we aim to cluster the human subjects and utilize the animal data as the rationale for estimating differences in cochlear dysfunction among the subtypes. This study is an important step toward improved diagnostic precision, and personalizing audiologic care beyond restoration of audibility. [This talk will review my specific aims for a grant application (F32) in progress and due in December. Feedback regarding the aims and methodologies discussed at the seminar is greatly appreciated.]
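The abstract does not specify the clustering method; as one illustration only (all variable names and the example biomarkers are hypothetical), subjects could be grouped over z-scored biomarker profiles with a minimal k-means:

```python
import numpy as np

def kmeans(X, k, n_iter=50, seed=0):
    """Minimal Lloyd's k-means: returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each subject to the nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned subjects.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Hypothetical biomarker profiles (rows: subjects; columns: z-scored
# measures, e.g. OAE level, EFR amplitude, wideband absorbance).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 0.3, (10, 3)), rng.normal(1, 0.3, (10, 3))])
labels, _ = kmeans(X, k=2)
```

In the proposed study, the animal data would then provide the rationale for interpreting the resulting clusters as distinct profiles of cochlear dysfunction.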
September 29, 2022
Brittany N. Jaekel, PhD, Research Scientist, Starkey
Exploring methodologies for studying listening effort in listeners with or without hearing loss
When attempting to understand speech, listeners may exert different levels of listening effort, depending on the auditory environment, motivation, and degree of hearing impairment, among many other factors. Methods for assessing listening effort are diverse in the literature, but typically fall into three categories: subjective, behavioral, and physiological. Utilizing methods from each category (self-report, dual-task paradigm, and heart rate variability, respectively), we explored how this variety of methodologies explains the listening effort experiences of participants with normal hearing and participants who use hearing aids.
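Of the three measures named above, heart rate variability may be the least familiar; a common time-domain HRV index is RMSSD, sketched here (the abstract does not specify which HRV index was used):

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (ms),
    a standard time-domain index of heart rate variability."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

# Toy example: ~75 bpm with small beat-to-beat variation.
print(rmssd([800, 810, 790, 805, 795]))  # ~14.4 ms
```

Lower HRV during a listening task is often interpreted as a marker of greater physiological effort or arousal.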
October 6, 2022
Harinath Garudadri, Research Scientist, Qualcomm Institute of Calit2, UC San Diego
Open-Source Platform for Hearing Healthcare Research
In this presentation, I will discuss my efforts to leverage Internet of Things (IoT) and smartphone technologies to address the many complex issues in healthcare innovation and translation. We developed an open-source speech processing platform (OSP), supported by grants from NIH, NSF, Army Research Labs, the Qualcomm Institute of Calit2 at UCSD, and Wrethinking, the Foundation. OSP comprises applications, algorithms, software, and hardware to extend current clinical research in situ and to enable new investigations beyond what is currently possible. The applications are enabled by an embedded web server that hosts web apps for monitoring and controlling the realtime engines. The algorithms include realtime, embedded signal-processing libraries for hearing-aid functions, electrophysiology, inertial measurement units (IMUs), and other sensors. The hardware is based on smartphone chipsets with connectivity and best-in-class MIPS/watt performance enabled by economies of scale. The software is based on 64-bit Linux (Debian 10, buster) with custom enhancements to the kernel and drivers. In addition, we have developed an application-specific integrated circuit (ASIC) and Linux drivers for efficient, multi-channel sensor readout. The latest release includes a processing and communication device (PCD) in a wearable form factor smaller than a deck of cards (~100 cm3), weighing 100 grams including a 3500 mAh battery. The PCD currently connects to sensors and actuators (e.g., hearing aids) with wires; wireless connectivity is planned. We are currently seeking collaborations to investigate the clinical benefits of OSP.
October 13, 2022
Varsha Mysore Athreya, PhD Student, SLHS
Effects of Age on Within-Channel and Across-Channel Temporal Processing and Relationship to Speech Perception in Noise
October 20, 2022
Ed Bartlett, Professor in Biological Sciences and Biomedical Engineering
Rapid Assessment of Temporal Processing from the Peripheral and Central Auditory Pathway using Dynamic Amplitude Modulated Stimuli
Background: Envelope or amplitude modulation (AM) cues in a signal are critically important for the perception of complex signals, e.g., speech. Neural coding of AM can be noninvasively probed using the envelope following response (EFR), which receives contributions from both cortical (slower fluctuations, <40 Hz) and subcortical (faster fluctuations, >100 Hz) generators. As subcortical versus cortical signatures of AM representations vary across hearing loss etiologies, the EFR has great diagnostic potential. AM representations are routinely evaluated by the temporal modulation transfer function (tMTF), which is the strength of the EFR as a function of AM frequency. Currently, the tMTF is measured serially for discrete sinusoidally amplitude modulated (sAM) tones. This process is time-consuming and inefficient, impeding clinical translation. Here we present a dynamically varying AM tone (dAM) used to measure the tMTF. I will analyze the tMTF obtained with and without hearing pathologies (aging, inflammation). Methods: EEG responses were obtained using sub-dermal needle electrodes in rodent models. In rodents, carrier frequencies and amplitude modulation rates varied based on species-specific differences. Tones with dynamically varying AM, the frequency of which increased exponentially from 9 Hz to 1.5 kHz over one second, and identical carrier frequencies were also used to elicit EFRs, and the tMTFs were compared. Fast Fourier transforms were used to calculate discrete EFR amplitudes. A spectrally specific frequency-demodulation-based analysis was used to calculate EFR amplitudes from dAM stimuli. Results: Robust tMTFs were obtained for discrete sAM stimuli from all species tested. Preliminary results show strong tracking of dAM envelopes in rodents, which can be used to estimate the tMTF at a fine frequency resolution. These tMTFs are comparable to those obtained using discrete sAM tones.
In humans, preliminary results suggest that tracking of dAM envelopes was comparable to sAM stimuli, but only at lower modulation frequencies. Ongoing analysis is aimed at refining the dAM stimuli (trajectories, timescales) to optimize AM tracking while simultaneously reducing recording times in humans. Conclusions: These results suggest that dynamically varying AM tones can be used to efficiently estimate the tMTF. However, further optimization is needed to obtain robust tMTF estimates in humans at higher AM frequencies. When combined with spectrally specific analysis, the dAM tone can be used to substantially speed up the tMTF measuring time, paving the way for potential clinical translation.
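As a sketch of the stimulus described above (the exponential 9 Hz to 1.5 kHz trajectory over one second is from the abstract; the carrier frequency, sample rate, modulation depth, and sweep law are assumptions for illustration), a dAM tone can be synthesized as:

```python
import numpy as np

def dam_tone(fc=8000.0, fm_start=9.0, fm_end=1500.0, dur=1.0,
             fs=48000, depth=1.0):
    """Tone whose AM rate sweeps exponentially from fm_start to fm_end.

    Instantaneous AM frequency: fm(t) = fm_start * (fm_end/fm_start)**(t/dur).
    The AM phase is the time integral of fm(t).
    """
    t = np.arange(int(dur * fs)) / fs
    k = np.log(fm_end / fm_start) / dur
    # Integral of fm(t) dt = fm_start * (exp(k*t) - 1) / k
    am_phase = 2 * np.pi * fm_start * (np.exp(k * t) - 1) / k
    envelope = 1 + depth * np.sin(am_phase)
    return envelope * np.sin(2 * np.pi * fc * t)

stim = dam_tone()  # one second of dAM tone at 48 kHz
```

Because the instantaneous AM frequency is known at every sample, a frequency-demodulation-based analysis of the EFR can read out envelope tracking strength continuously along the sweep, rather than at a handful of discrete sAM rates.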
October 27, 2022
Matt Hay
A personal journey with neurofibromatosis type 2 (NF2) and auditory brainstem implantation (ABI)
Matt Hay is an author, keynote speaker, patient advocate, university guest lecturer, and auditory brainstem implant (ABI) user. Matt lives with neurofibromatosis type 2 (NF2). He currently resides in Westfield, Indiana with his wife and children, where he works as a Consumer Insights Analyst. He is also a member of the Board of Directors of the Children’s Tumor Foundation and is the U.S. Director of Advocacy for Neurofibromatosis.
November 3, 2022
Sarthak Mangla, undergraduate student, CS
IndivHear: an Individualized Adaptive Deep Learning-based Hearing Aid
Prior to coming to Purdue, motivated by his grandmother’s experience with hearing loss and hearing aids, Sarthak developed an inexpensive smartphone-based individualized adaptive hearing aid using deep learning. For patients with mild hearing loss, hearing aids have limited user benefits due to the low accessibility of professional help, limitations in fitting and diagnostic procedures of hearing devices, and the tedious process of setting up hearing aids. For patients with moderate to profound hearing loss, hearing support is inadequate due to limitations in the individualized fitting and diagnostic procedures of hearing aids, inadequacies in the functional capabilities of existing hearing aids, and limited human-machine interfaces to steer the device based on the patient’s individual needs. In an attempt to solve these problems, Sarthak created an end-to-end solution that utilizes a novel data collection approach to train individualized deep learning networks as hearing aids. He also developed fully automated and remote versions of traditional fitting and diagnostic procedures like Pure Tone Audiometry and Speech Audiometry.
November 10, 2022
Alexander L. Francis, Professor, SLHS
[RCR Discussion]: Practical Considerations for Preregistration in Hearing Science
In this presentation I will introduce “preregistration” within the broader context of open science practices. I will follow the example set by Brown & Strand (2022) (https://psyarxiv.com/f52gj/ ) and structure my talk around a demonstration of the process of preregistration, using a study my lab group is designing to replicate previous research on the effects of wearing hearing aids on postural sway in older adults. Participants are encouraged to take a look at the Brown & Strand article beforehand, and to bring a laptop to follow along and preregister their own projects.
November 17, 2022
Eric R. Rodriguez, AuD/PhD Student, SLHS
¿Manzanas o Naranjas? [Apples or Oranges?]: Comparing the English & Spanish AzBio Sentences-in-Noise Test Corpora
Although the population of the United States is becoming increasingly diverse, the demographics of speech, language, and hearing professionals do not reflect this racial and linguistic diversity. This discrepancy is also apparent in the available speech-test batteries used during audiology assessments, such as the AzBio Sentences-in-Noise Test corpora. While the English AzBio is regarded as the “gold standard” for assessing cochlear implant candidacy in English speakers, the recently developed Spanish AzBio equivalent does not account for the dialectal or language experiences of most of the Spanish speakers in the United States. Are these test corpora comparable in difficulty level for Spanish-speaking adults with varying dialects?
January 12, 2023
Alexander L. Francis, Professor, SLHS
Preliminary findings from a study relating the effects of hearing loss and listening conditions on postural sway to fall risk in older adults
Hearing loss is associated with increased fall risk in older adults, but multiple mechanisms have been proposed to account for this. Hearing loss may reduce spatial awareness and/or increase cognitive load, both of which may increase fall risk but may be ameliorated by hearing aid use. Here we present preliminary results from an in-progress study comparing fall incidence in daily life with postural sway measured under various listening conditions in older adults with and without hearing aids. Seventeen adults aged 65 to 81 years (7 using bilateral hearing aids, 10 without) stood with feet together and eyes closed, listening to noise-vocoded speech (4, 8, 16 channels) and spatially distributed environmental sounds (silence, 1, 3 sources) for 1 minute per condition while postural sway was recorded in the lab. Participants subsequently reported daily near-falls and falls for 4 months. Results suggest hearing aid users show less postural sway, potentially indicating greater rigidity, which has been associated with greater fall risk. However, near-fall incidence was lower for hearing aid users. Further analyses showed higher near-fall rates associated with higher auditory thresholds, but hearing aid use may mitigate this trend. We are currently collecting more data with a wider variety of participants.
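The abstract does not name the sway summary statistic; one common choice, the total path length of the center of pressure over a trial, is sketched below (a minimal illustration, not the study's actual analysis):

```python
import numpy as np

def sway_path_length(cop_xy):
    """Total center-of-pressure path length (same units as the input),
    a standard summary measure of postural sway.

    cop_xy: sequence of (x, y) center-of-pressure samples over one trial.
    """
    steps = np.diff(np.asarray(cop_xy, dtype=float), axis=0)
    return np.sum(np.linalg.norm(steps, axis=1))

# Toy trace: a closed unit square traced in cm has path length 4 cm.
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(sway_path_length(square))  # -> 4.0
```

Under this kind of metric, "less postural sway" corresponds to a shorter path length per trial, which, as noted above, can reflect either greater stability or greater rigidity.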