Task 003: Algorithms for Emerging Hardware

Event Date: February 18, 2021
Time: 11:00 am (ET) / 8:00 am (PT)
School or Program: Electrical and Computer Engineering
Gokul Krishnan, Arizona State University
Robust RRAM-based In-Memory Computing in Light of Model Stability
Abstract:
Resistive random-access memory (RRAM)-based in-memory computing (IMC) architectures offer an energy-efficient solution for DNN acceleration. However, the performance of RRAM-based IMC is limited by device non-idealities, ADC precision, and algorithm properties. To address this, we first perform a statistical characterization of RRAM device variation and temporal degradation using 300mm wafers of a fully integrated CMOS/RRAM 1T1R test chip at 65nm, building a realistic foundation for assessing robustness. Second, we develop a cross-layer simulation tool that brings device, circuit, architecture, and algorithm properties under a single roof for system-level evaluation. Third, we propose a novel loss-landscape-based DNN model selection for stability, which effectively tolerates device variations and achieves post-mapping accuracy higher than that obtained with 50% lower RRAM variations. We demonstrate the proposed method for different DNNs on both the CIFAR-10 and CIFAR-100 datasets. Finally, we extend the stability-based model selection to a novel variation-aware training (VAT) method for compressed (pruned and quantized) DNNs, where the most stable VAT model provides the best post-mapping accuracy. Experimental evaluation shows up to 19%, 21%, and 11% post-mapping accuracy improvement for our 65nm RRAM device, across various precision and sparsity levels, on the CIFAR-10, CIFAR-100, and SVHN datasets, respectively.
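
To make the stability idea concrete, below is a minimal, hypothetical sketch of how model selection under RRAM variation could be evaluated: trained weights are perturbed with multiplicative log-normal noise (a common stand-in for conductance variation), and candidate models are ranked by their accuracy after perturbation. The noise model, the `sigma` value, and all helper names are illustrative assumptions, not the speaker's actual implementation.

```python
# Hypothetical sketch: rank candidate DNNs by robustness to RRAM-style
# weight variation, as a proxy for loss-landscape flatness / model stability.
# The log-normal noise model and sigma value are illustrative assumptions.
import copy
import torch

@torch.no_grad()
def accuracy(model, loader, device="cpu"):
    model.eval()
    correct = total = 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

@torch.no_grad()
def inject_rram_variation(model, sigma=0.1):
    """Return a copy of the model with multiplicative log-normal noise on
    conv/linear weights, mimicking device-to-device conductance variation."""
    noisy = copy.deepcopy(model)
    for m in noisy.modules():
        if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear)):
            noise = torch.randn_like(m.weight) * sigma
            m.weight.mul_(torch.exp(noise))
    return noisy

def select_most_stable(models, loader, sigma=0.1, trials=10):
    """Pick the candidate whose mean post-mapping (noisy) accuracy is highest."""
    scores = []
    for model in models:
        noisy_acc = [accuracy(inject_rram_variation(model, sigma), loader)
                     for _ in range(trials)]
        scores.append(sum(noisy_acc) / trials)
    best = max(range(len(models)), key=lambda i: scores[i])
    return models[best], scores
```

Averaging accuracy over several noise draws approximates how flat the loss landscape is around the trained weights; a flatter minimum degrades less when mapped onto variable devices, which is the intuition behind the stability-based selection described in the abstract.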
 
Bio:
Gokul Krishnan received his Bachelor of Technology degree in Electronics and Communication Engineering from Govt. Model Engineering College, Kochi, India, in 2016. He is currently working towards a Ph.D. degree in Electrical Engineering at Arizona State University, advised by Prof. Yu Cao. His research interests include neuromorphic hardware design for deep learning training and inference, joint algorithm-architecture design for learning on-a-chip, model compression of DNNs, and performance modeling for CMOS and post-CMOS-based in-memory computing hardware architectures. He is the recipient of the Joseph A. Barkson Fellowship for 2020-21.