Pitts receives NSF award

Photo: Pitts' National Advanced Driving Simulator (NADS), called the "miniSim"
Brandon Pitts, assistant professor of industrial engineering, has received a research grant from the National Science Foundation (NSF) for an advanced automation and aging project.

The project is titled "CRII: CHS: Bridging the Age-Related Performance Gap: Multimodal Interfaces to Support Older Adults in Transitioning to Manual Control in Autonomous Systems."

"The goals of this project are to better understand the interactions of different groups of older adults with autonomous systems, and to begin to develop tools that support their performance, such as in situations that involve transfer-of-control," explains Pitts, head of the Next-generation Human-systems and Cognitive Engineering (NHanCE) Research Laboratory.

Pitts is the principal investigator and will assign PhD student Gaojian Huang, along with one undergraduate research assistant, to the two-year project.


Advanced autonomous systems have the potential to significantly reduce workload and extend human capabilities in a number of safety-critical transportation and work environments, such as driving, medicine, and manufacturing. However, even the most sophisticated systems are often constrained by design limits and/or may experience occasional malfunctions that require human-in-the-loop manual intervention. To date, there is no consensus on how best to assist operators of varying age and ability levels in noticing failures, diagnosing them, and recovering manual control across a wide range of autonomous systems. Adults 65 years and older are now the fastest-growing age group and are expected to encounter systems with increasing levels of automation throughout later stages of life, yet perceptual and cognitive challenges often prevent their effective use of such technology.

The goals of this project are to better understand age-related differences in human-automation interactions, and to begin developing methods and tools that support the manual recovery of older adults across various automated technologies. Combined sensory feedback will be explored as a potential technique toward these ends, as it has been shown to improve attention management and benefit older individuals.

Project outcomes will contribute to a more in-depth understanding of the capabilities and limitations of different operator demographics, and will help guide the development of next-generation human-machine interfaces. The work has broader implications for enhancing safety in many complex operations, such as autonomous driving and automated process assembly. Public educational activities will include workshops focused on the community and the study population (seniors), pre-college and summer outreach, and undergraduate research programs for underrepresented students.

Multimodal interfaces present information to the visual, auditory, and tactile sensory channels. By manipulating signal parameters, these interfaces can capture attention, inform operators of system status, and provide decision aids for performing needed actions. However, the extent to which this approach can effectively communicate with a range of operators who vary considerably in sensory and cognitive abilities, in the context of transfer-of-authority, has not been quantified. Given the rapid development of advanced autonomous technology and the population changes projected for the next decade, it will be critical to fill these research gaps.

This project will generate age-related empirical data on complex interactions within autonomous systems. A series of experiments will be conducted using semi-autonomous driving simulations involving subjects from different age groups. The research will quantify age-related differences in the time required to notice multimodal transition (takeover) requests, determine age-specific transition times as a function of lead time and the sensory modality of the notification, and investigate the effectiveness of various tactile signals in supporting situation awareness and reducing transition times. Results are expected to inform quantitative and qualitative models of human perception, information processing, and performance.