Task 019: Robustness of Neural Systems
Event Date: July 22, 2021
Time: 11:00 am (ET) / 8:00 am (PT)
Abhiroop Bhattacharjee, Yale University: "Non-ideality Driven Techniques for Energy-Efficient and Robust Implementation of Neural Networks on Xbars"
ABSTRACT: Memristive crossbars based on non-volatile-memory (NVM) devices can implement Deep Neural Networks (DNNs) in an energy-efficient and compact manner. However, they suffer from non-idealities (such as interconnect parasitics and device variations) introduced by the circuit topology, which degrade the computational accuracy of the mapped DNNs. Furthermore, DNNs are prone to adversarial attacks, posing severe security threats to their large-scale deployment. This talk revolves around strategies that increase the proportion of low-conductance (high-resistance) synapses in crossbars when mapping DNNs, thereby reducing the impact of crossbar non-idealities while improving adversarial robustness and energy efficiency during inference. We divide the talk into two broad parts.

In the first part, we propose SwitchX, a mapping of the binary weights of a Binarized Neural Network (BNN) onto crossbars such that the crossbar comprises more low-conductance synapses. This decreases the overall output dot-product current, leading to energy savings in crossbars. Furthermore, BNNs mapped onto crossbars with SwitchX also exhibit better robustness against adversarial attacks than standard crossbar-mapped BNNs.

In the second part, we focus on DNNs mapped onto 1T-1R crossbars, which help mitigate sneak paths. Since 1T-1R crossbars operating at lower transistor gate voltages (Vg) are shown to consume less energy, we first analyze the non-linear effects of operating such 1T-1R crossbar synapses at low Vg. We then propose a novel Non-linearity Aware Training (NEAT) of DNNs to mitigate these non-linearities. Specifically, we identify the range of network weights that can be mapped onto a 1T-1R cell within the linear operating region of the transistor, and then regularize the network weights to stay within this linear range using an iterative training algorithm. This iterative training significantly recovers the classification accuracy drop caused by the non-linearities. NEAT not only enables an energy-efficient and accurate mapping of DNNs but also boosts the proportion of low-conductance synapses in the crossbars. Thus, like SwitchX, DNNs trained with NEAT on 1T-1R crossbars exhibit greater adversarial robustness than those mapped conventionally onto standard 1R crossbars.
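The abstract does not spell out the SwitchX algorithm; as a rough illustrative sketch of the underlying idea, a binary weight column can be mapped either directly or in complemented form (with a digital sign correction), choosing whichever assignment yields more low-conductance devices. The conductance values `G_HIGH`/`G_LOW` and the per-column selection below are assumptions for illustration, not the authors' exact method:

```python
import numpy as np

G_HIGH, G_LOW = 1e-4, 1e-6  # assumed conductance levels (S), illustrative only

def map_column(w_col):
    """Map a {-1,+1} binary weight column to crossbar conductances.

    Direct mapping: +1 -> G_HIGH, -1 -> G_LOW.
    The complemented mapping flips the weights first (recording a sign
    flip to undo digitally), so whichever choice yields more G_LOW
    devices can be used -- a SwitchX-style selection.
    """
    direct = np.where(w_col == 1, G_HIGH, G_LOW)
    comp = np.where(-w_col == 1, G_HIGH, G_LOW)
    # prefer the mapping with more low-conductance synapses
    if (comp == G_LOW).sum() > (direct == G_LOW).sum():
        return comp, -1  # -1: dot-product output is sign-corrected digitally
    return direct, +1

rng = np.random.default_rng(0)
W = rng.choice([-1, 1], size=(8, 4))  # toy binary weight matrix
cols = [map_column(W[:, j]) for j in range(W.shape[1])]
low_frac = np.mean([(g == G_LOW).mean() for g, _ in cols])
print(f"fraction of low-conductance synapses: {low_frac:.2f}")
```

By construction this selection can never do worse than the direct mapping, so at least half of the mapped devices end up in the low-conductance state, which is what reduces the output dot-product current.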
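The "identify the linear range, then regularize weights into it iteratively" loop of NEAT can be sketched as a projected update: after each gradient step, weight magnitudes are clipped back into the window that maps onto the transistor's linear operating region. The window bounds `W_MIN`/`W_MAX`, the projection-by-clipping, and the toy values below are assumptions for illustration, not the paper's actual training procedure:

```python
import numpy as np

# Hypothetical linear operating window for a 1T-1R cell at low Vg:
# only weight magnitudes inside [W_MIN, W_MAX] are assumed to map onto
# the linear region of the access transistor (values are illustrative).
W_MIN, W_MAX = 0.05, 0.60

def project_to_linear_range(w):
    """Clip weight magnitudes into the assumed linear window, keeping signs."""
    return np.sign(w) * np.clip(np.abs(w), W_MIN, W_MAX)

def neat_style_step(w, grad, lr=0.1):
    """One iteration of a 'train, then constrain' loop: a gradient
    update followed by projection into the mappable weight range."""
    w = w - lr * grad
    return project_to_linear_range(w)

w = np.array([0.9, -0.7, 0.02, 0.3])   # toy weights, some outside the window
g = np.array([0.1, -0.2, 0.0, 0.05])   # toy gradients
w_new = neat_style_step(w, g)
print(w_new)  # every magnitude now lies within [W_MIN, W_MAX]
```

Repeating this step over training lets the network recover accuracy while all weights remain mappable within the linear region, which is the effect the talk attributes to the iterative NEAT training.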
Bio: Abhiroop Bhattacharjee received his B.E. in Electrical and Electronics Engineering from the Birla Institute of Technology and Science, Pilani, India, in 2020. He joined Yale University, USA, in 2020 as a Ph.D. student in the Electrical Engineering department, where he currently works in the Intelligent Computing Lab under the supervision of Prof. Priyadarshini Panda. Prior to joining Yale, he worked as a guest researcher at the Chair for Processor Design, TU Dresden, Germany, in 2020, and as a research intern at the Institute of Materials in Electrical Engineering-I, RWTH Aachen University, Germany, in 2019. His research interests lie in adversarial security and compute-in-memory architectures for neuromorphic circuits.