Leveraging Human Input to Enable Robust AI Systems

Event Date: January 24, 2022
Time: 10:30 am
Location: PGSC 105
School or Program: Electrical and Computer Engineering
Dr. Daniel Brown
Postdoctoral Scholar
University of California, Berkeley

Abstract
 
In this talk I will discuss recent progress toward using human input to enable safe and robust autonomous systems. Whereas much work on robust machine learning and control seeks either to be resilient to human input or to remove the need for it, my research seeks to directly and efficiently incorporate human input into the study of robust AI systems. One problem that arises when robots and other AI systems learn from human input is that there is often substantial uncertainty over the human's true intent and the corresponding desired robot behavior. To address this problem, I will discuss prior and ongoing research along three main topics: (1) how to enable AI systems to efficiently and accurately maintain uncertainty over human intent, (2) how to generate risk-averse behaviors that are robust to this uncertainty, and (3) how robots and other AI systems can efficiently query for additional human input to actively reduce uncertainty and improve their performance. My talk will conclude with a discussion of my long-term vision for safe and robust autonomy, including learning from multi-modal human input, interpretable and verifiable robustness, and techniques for human-in-the-loop robust machine learning that generalize beyond reward function uncertainty.
 
Bio
 
Daniel Brown is a postdoctoral scholar at UC Berkeley, advised by Anca Dragan and Ken Goldberg. His research focuses on safe and robust autonomous systems, with an emphasis on robot learning under uncertainty, human-AI interaction, and value alignment of AI systems. He evaluates his research across a range of applications, including autonomous driving, service robotics, and dexterous manipulation. Daniel received his Ph.D. in computer science from the University of Texas at Austin, where he worked with Scott Niekum on safe and efficient inverse reinforcement learning. Prior to starting his Ph.D., Daniel was a research scientist at the Air Force Research Lab's Information Directorate, where he studied bio-inspired swarms and multi-agent systems. Daniel's research has been nominated for two best-paper awards, and he was selected in 2021 as a Robotics: Science and Systems Pioneer.
 
Host
Phil Paré, philpare@purdue.edu
