Verification of Learning-enabled Autonomous Cyber-Physical Systems
| Event Date: | November 18, 2021 |
|---|---|
| Time: | 10:30 am – 11:30 am |
| Location: | POTR 234 |
| School or Program: | Electrical and Computer Engineering |
Hoang-Dung Tran
Assistant Professor of Computer Science and Engineering
University of Nebraska-Lincoln
Abstract
Over the last two decades, artificial intelligence (AI) has blossomed, with implications reaching from healthcare, marketing, banking, and gaming to the automotive industry. Although AI is powerful and outperforms humans on many complicated tasks, it has stimulated a longstanding debate among researchers, tech companies, and lawmakers as to whether we can bet human lives on AI. To use AI in safety-critical applications, there is an urgent need for methods that can prove the safety of AI systems. Conventional approaches that demonstrate safety through extensive simulation and rigorous testing are usually costly and incomplete. For example, to demonstrate a catastrophic failure rate of less than one per hour, autonomous vehicle systems would need to perform billions of miles of test driving. More importantly, such driving tests cannot cover all corner cases that may arise in the field. Consequently, new approaches based on formal methods, safe planning and synthesis, and robust learning are urgently needed, not only to prove but also to enhance the safety and reliability of AI systems. In principle, these approaches can automatically explore all unforeseen scenarios when verifying or falsifying the safety of an AI system. They can also generate provably correct planning decisions and safe control actions, and improve the robustness of AI systems under uncertain scenarios and adversarial attacks.
This talk will present NNV, a novel framework for safety and robustness verification of deep neural networks (DNNs) and of autonomous cyber-physical systems with learning-enabled components, under adversarial attacks and sensing, actuating, and model uncertainties. The crux of the framework is a set of reachability algorithms that efficiently compute the reachable sets of a DNN or a neural network control system (NNCS). These reachable sets, which contain all possible (output) states of the system under uncertainty, are used to verify whether the system satisfies its desired properties. I will demonstrate the applicability of NNV in a wide range of applications, including verifying: 1) the safety of the ACAS Xu networks, a set of neural networks for supervisory control of the Airborne Collision Avoidance System X; 2) the robustness of the VGG16 and VGG19 perception networks; and 3) the safety of an advanced emergency braking system and an adaptive cruise control system in autonomous driving.
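To give a flavor of the reachability idea behind this line of work, the sketch below propagates an interval box of uncertain inputs through a tiny ReLU network and checks an output property against the resulting reachable interval. This is a minimal interval-arithmetic illustration only, not NNV's actual algorithm (NNV uses more precise set representations such as star sets and zonotopes); the network architecture, weights, and the safety threshold are all hypothetical.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an interval box [lo, hi] through an affine layer x -> W @ x + b.

    Splitting W into its positive and negative parts gives the tightest
    interval bound for an affine map applied to a box.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def interval_relu(lo, hi):
    """ReLU is monotone, so it can be applied to the bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-2-1 ReLU network with hypothetical weights.
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.5])

# Input uncertainty: every x in the box [-0.1, 0.1]^2
# (e.g. a bounded sensor perturbation).
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)

# Property check: the output must stay below 1.0 for ALL inputs in the box.
# If the over-approximated reachable interval satisfies the property,
# the property is proved; otherwise the result is inconclusive.
assert hi[0] < 1.0, "safety property may be violated"
print(f"output reachable interval: [{lo[0]:.3f}, {hi[0]:.3f}]")
# → output reachable interval: [0.500, 0.700]
```

Because interval propagation over-approximates the true reachable set, a passing check is a sound safety proof, while a failing check may be a false alarm; the tighter set representations discussed in the talk reduce exactly this conservatism.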