Task 2777.001/2777.002: On Adversarial Susceptibility and Defense of Neural Networks
Event Date: May 16, 2019
Time: 2:00pm ET / 11:00am PT
School or Program: Electrical and Computer Engineering
As artificial intelligence grows in this digital age, bringing large-scale social disruption, there is growing concern in the research community about the vulnerability of neural networks to adversarial attacks. To that end, in this talk I will describe discretization-based solutions, traditionally used to reduce the resource utilization of deep neural networks, repurposed for adversarial robustness. I will then present a novel noise-learning training strategy as an adversarial defense method. Finally, I will delve into principal component analysis, which enables us to visualize and understand the relationship between clean and adversarial data and yields metrics that generalize across adversarial defenses.
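The PCA-based analysis mentioned above can be illustrated with a minimal numpy sketch. This is not the speaker's method, only an assumed setup: synthetic feature vectors stand in for clean network activations, and a small random perturbation stands in for an adversarial one; both sets are projected into the same principal subspace fitted on the clean data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins (illustrative assumption, not real network features):
# "clean" feature vectors and adversarially perturbed copies.
clean = rng.normal(size=(200, 10))
adversarial = clean + 0.5 * rng.normal(size=(200, 10))

# PCA via SVD of the mean-centered clean data.
mean = clean.mean(axis=0)
_, _, vt = np.linalg.svd(clean - mean, full_matrices=False)
components = vt[:2]  # top-2 principal directions

# Project both sets into the same 2-D principal subspace of the clean data.
clean_2d = (clean - mean) @ components.T
adv_2d = (adversarial - mean) @ components.T

# One simple summary metric: average displacement in principal-component
# space, i.e. how far the perturbation moves points in the clean subspace.
shift = np.linalg.norm(adv_2d - clean_2d, axis=1).mean()
print(f"mean displacement in PC space: {shift:.3f}")
```

Plotting `clean_2d` against `adv_2d` gives the kind of clean-vs-adversarial visualization the abstract alludes to, and summary statistics of the displacement can serve as defense-agnostic metrics.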
Priyadarshini Panda received her B.E. degree in Electrical & Electronics Engineering and her M.Sc. degree in Physics from BITS Pilani, India, in 2013, where she was awarded the gold medal in Physics for academic excellence. In 2013, she joined Intel, Bangalore, India, where she worked on RTL design for graphics power management. She is currently pursuing a Ph.D. under the guidance of Prof. Kaushik Roy, and she worked as a research intern at Intel Labs, Oregon, in summer 2017. Her research interests lie in neuromorphic computing, specifically: developing scalable, energy-efficient design methodologies for deep learning applications (recognition, inference, analytics); novel supervised/unsupervised learning algorithms for deep spiking/dynamic reservoir networks for spatio-temporal data processing; developing novel architectures for new computing models (e.g., planning and decision making); and theoretical understanding to validate the robustness of deep learning and spiking networks.