Task 008: Cognition on Compressed and Unreliable Data

Event Date: September 26, 2019
Time: 2:00 pm ET / 11:00 am PT
School or Program: Electrical and Computer Engineering
Xuankang Lin, Purdue University
ART: Abstraction Refinement-Guided Training for Provably Correct Neural Networks
Abstract:
Artificial neural networks (ANNs) have demonstrated remarkable utility in a variety of challenging machine learning applications. However, their complex architecture makes it difficult to assert any formal guarantees about their behavior. Existing approaches to this problem typically treat verification as a post facto, white-box process, one that reasons about the safety of an existing network by exploring its internal structure.
 
In this talk, we present ART, a novel learning framework that enables the construction of provably correct networks with respect to a broad class of safety properties, a capability that goes well beyond existing approaches. Our key insight is that we can integrate an optimization-based abstraction-refinement loop into the learning process, one that dynamically splits the input space from which training data is drawn based on how effectively the resulting partition enables safety verification.
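
To make the idea concrete, here is a minimal, self-contained sketch of such a loop on a toy problem. Everything in it is an illustrative assumption rather than the ART implementation: a linear model, a hypothetical safety property f(x) >= 0 over the input box [0,1]^2, interval arithmetic as the abstraction, bisection of the widest dimension as the refinement step, and helper names such as lower_bound and split.

# Illustrative sketch only: a toy abstraction-refinement-guided training loop.
# The model, property, and all helper names below are hypothetical choices for
# illustration; they are not taken from the ART paper or its implementation.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)          # toy model: f(x) = w.x + b
b = np.array(0.0)

def f(x):
    return x @ w + b

def lower_bound(box):
    # Sound lower bound of f over an axis-aligned box via interval arithmetic.
    lo, hi = box
    return np.sum(np.minimum(w * lo, w * hi)) + b

def split(box):
    # Refine a box by bisecting its widest dimension.
    lo, hi = box
    d = int(np.argmax(hi - lo))
    mid = (lo[d] + hi[d]) / 2.0
    lo2, hi1 = lo.copy(), hi.copy()
    lo2[d], hi1[d] = mid, mid
    return [(lo, hi1), (lo2, hi)]

# Training data for the ordinary task loss (toy regression target).
X = rng.uniform(0.0, 1.0, size=(256, 2))
y = X.sum(axis=1)

boxes = [(np.zeros(2), np.ones(2))]   # abstraction of the input region [0,1]^2
lr = 0.05
for epoch in range(200):
    # Task loss gradient (least squares).
    err = f(X) - y
    gw = X.T @ err / len(X)
    gb = err.mean()

    # Safety loss: penalize boxes whose certified lower bound violates f >= 0.
    unsafe = [bx for bx in boxes if lower_bound(bx) < 0.0]
    for lo, hi in unsafe:
        corner = np.where(w >= 0.0, lo, hi)   # gradient of the lower bound w.r.t. w
        gw -= corner                          # push the certified bound upward
        gb -= 1.0

    w -= lr * gw
    b -= lr * gb

    # Refinement: periodically split boxes that still fail, tightening the bounds.
    if unsafe and epoch % 20 == 19:
        boxes = [nb for bx in boxes
                 for nb in (split(bx) if lower_bound(bx) < 0.0 else [bx])]

print("remaining unverified boxes:", sum(lower_bound(bx) < 0.0 for bx in boxes))

The pattern to notice is that verification failures feed back into both the optimization (a penalty on the certified worst-case bound) and the partition of the input region (finer boxes yield tighter bounds), which is the interplay the framework exploits.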
 
We provide theoretical results showing that the classical gradient descent methods used to optimize these networks can be seamlessly adapted to this framework, ensuring the soundness of our approach. Moreover, we empirically demonstrate that achieving soundness does not come at the price of accuracy, giving us a meaningful pathway for building networks that are both precise and correct.
 
Bio:
Xuankang Lin is a Ph.D. student in the Department of Computer Science at Purdue University, West Lafayette, IN, USA, working with Prof. Suresh Jagannathan and Dr. Roopsha Samanta. He received his B.S. in Software Engineering from Tongji University, Shanghai, China, in 2013. His main research interest is applying formal methods to reason about and enforce the safety of AI models.