
SNAPlab receives NSF funding!

SNAPlab received a two-year NSF award for a multi-site project in collaboration with U. Minnesota, BU, Carnegie Mellon, and U. Rochester. The project is titled "Testing the relationship between musical training and enhanced neural coding and perception in noise."

The award information and grant abstract are available on NSF's award search page: https://www.nsf.gov/awardsearch/showAward?AWD_ID=1840699&HistoricalAwards=false

This project will determine whether formal musical training is associated with enhanced neural processing and perception of sounds, including speech in noisy backgrounds. Music forms an important part of the lives of millions of people around the world, and it is one of the few universals shared by all known human cultures. Yet its utility and potential evolutionary advantages remain a mystery. This project will test the hypothesis that early musical exposure has benefits that extend beyond music to critical aspects of human communication, such as speech perception in noise. In addition, the investigators will test whether early musical training is associated with less severe effects of aging on the ability to understand speech in noisy backgrounds. Degraded ability to understand speech in noise is a common complaint among older listeners, and hearing loss has been shown to be associated with social isolation and more rapid cognitive and health declines. If formal musical training is shown to improve perception and speech communication in later life, the outcomes could have a major impact on quality of life.

Earlier studies have suggested relationships between early musical training and improved auditory neural processing and perception, but their impact has been limited by small sample sizes and inconsistent methods across studies. This project will test a large number of participants (N=360) with uniform recruitment criteria and testing protocols across six different sites. Measures will include the neural frequency-following response (FFR) to speech sounds, behavioral frequency selectivity, speech perception in noise, speech perception against a background of competing talkers, pitch discrimination, and auditory masking. The participants will also complete other assessments, including a personality inventory questionnaire, a profile of musical perception skills, a spatial reasoning test to assess general cognitive ability, and a background questionnaire covering socio-economic status, education, and musical background. Participants will be selected to span a wide range of ages and musical experience. The neural data and the speech perception measures will be related to factors of musical training, such as the number of years of musical training and the age at which musical training began. Scientific rigor will be assured by preregistering the study and the analyses and by making the data and analysis code publicly available via a dedicated website.