C-BRIC JUMP e-Workshop
|Event Date:||November 2, 2021|
|Time:||11:00 am (ET) / 8:00 am (PT) and 8:00 pm (ET) / 5:00 pm (PT)|
Abstract: As we rapidly adopt deep learning into all the technology that surrounds us, there is a need to look beyond the energy-accuracy tradeoff and bring robustness into the design space exploration. Research has started in earnest to understand how efficiency-driven optimization techniques can lend adversarial robustness to deep neural networks (DNNs). In this talk, we will explore hardware-centric techniques, in the context of non-volatile memory (NVM) based crossbars, that can effectively improve the adversarial robustness of neural systems while maintaining performance and computational efficiency. Specifically, we will look at techniques that control the intrinsic non-idealities in NVM crossbars (device, circuit, parasitic, and transistor-related non-linearities) to boost the adversarial robustness of DNNs mapped onto them by more than 10% over their standard software implementations. Further, we will present a technique called DetectX that enables adversarial detection in hardware using current signatures during real-time inference. Combining DetectX with the NeuroSim crossbar evaluation platform, we demonstrate 10x-25x better energy efficiency and higher detection scores than state-of-the-art neural network-based software detection approaches.

In the second half of the talk, we will examine the role of event-driven computation and spiking dynamics in tackling robustness for image classification and complex segmentation tasks, as well as in preserving privacy during distributed learning. While considerable attention has been given to optimizing training algorithms for spiking neural networks (SNNs), explainability for SNNs is still in its infancy. I will present our recent work on a bio-plausible visualization tool for SNNs, called Spike Activation Map (SAM). With SAM, I will highlight the differentiating factors between ANNs and SNNs and demonstrate why SNNs tend to be more robust.
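The idea that crossbar non-idealities can act as a robustness mechanism can be illustrated with a toy model. The sketch below is only a minimal illustration, not the speakers' actual method: it assumes the non-idealities can be approximated as multiplicative stochastic variation on the programmed conductances (weights), and the function name and noise parameter are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_noisy_matmul(x, W, sigma=0.05):
    """Toy model of an NVM crossbar dot product: each programmed
    conductance (weight) is perturbed by a multiplicative device-level
    variation of relative magnitude ~sigma before the analog multiply."""
    W_noisy = W * (1.0 + sigma * rng.standard_normal(W.shape))
    return x @ W_noisy

# The non-idealities behave like stochastic weight noise, so the mapping
# an attacker probes differs slightly on every inference pass.
x = np.ones(4)
W = np.eye(4)
y_clean = x @ W
y_noisy = crossbar_noisy_matmul(x, W, sigma=0.05)
```

In this simplified view, the hardware variation perturbs the effective decision function seen by a gradient-based adversary, which is one intuition for why crossbar-mapped DNNs can be harder to attack than their exact software counterparts.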
We will conclude the talk with recent findings from C-BRIC and discuss future directions for enabling robust, efficient, and accurate neural systems for autonomous applications.