Rx-Caffe: Framework for evaluating and training Deep Neural Networks on Resistive Crossbars

Event Date: October 25, 2018
Shubham Jain, Purdue University
Task 006, Programming and Evaluation Framework
2 pm EDT/12 pm MDT/11 am PDT

Bio: Shubham Jain is a Ph.D. student in the School of Electrical and Computer Engineering at Purdue University. His research interests include circuit and architectural techniques for emerging post-CMOS devices and computing paradigms such as spintronics, approximate computing, neuromorphic computing, and deep learning. He received a B.Tech. (Hons.) degree in Electronics and Electrical Communication Engineering from the Indian Institute of Technology (IIT) Kharagpur, India, in 2012. After graduation, he worked for two years at Qualcomm, Bangalore, India. He was a summer intern at the IBM T.J. Watson Research Center, Yorktown Heights, in 2017 and 2018. He received the Mitacs Globalink scholarship in 2011 and the Andrews Fellowship from Purdue University in 2014. He is also a recipient of a Best Paper Award at DAC 2018.


Abstract: Deep Neural Networks (DNNs) are widely used to perform machine learning tasks in speech, image, and natural language processing. The high computation and storage demands of DNNs have created a need for energy-efficient implementations. Resistive crossbar systems have emerged as promising candidates due to their ability to compactly and efficiently realize the primitive DNN operation, viz., vector-matrix multiplication. In practice, however, the functionality of resistive crossbars may deviate considerably from the ideal abstraction due to device- and circuit-level non-idealities such as driver resistance, sensing resistance, sneak paths, and interconnect parasitics. Although DNNs are somewhat tolerant to errors in their computations, it is still essential to evaluate the impact of errors introduced by crossbar non-idealities on DNN accuracy. Unfortunately, device- and circuit-level models are computationally infeasible for large-scale DNNs with 2.6-15.5 billion synaptic connections.
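The core idea above can be illustrated with a minimal sketch: a crossbar with input voltages on its rows and programmable conductances at its crosspoints produces column currents equal to a vector-matrix product. The function names and the simple series-resistance model below are assumptions for illustration only, not the Fast Crossbar Model from the talk; real crossbars additionally suffer sneak paths and interconnect parasitics, which require solving the full resistive network.

```python
def ideal_vmm(voltages, conductances):
    """Ideal crossbar: column current j = sum_i V[i] * G[i][j]."""
    num_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(num_cols)]

def nonideal_vmm(voltages, conductances, r_driver=0.05, r_sense=0.05):
    """Toy non-ideality (illustrative assumption): driver and sensing
    resistances appear in series with each cell, reducing its effective
    conductance to G_eff = 1 / (1/G + R_driver + R_sense)."""
    g_eff = [[1.0 / (1.0 / g + r_driver + r_sense) for g in row]
             for row in conductances]
    return ideal_vmm(voltages, g_eff)

# Example: a 2x2 crossbar (voltages in V, conductances in S).
V = [1.0, 0.5]
G = [[0.2, 0.4],
     [0.6, 0.1]]
print(ideal_vmm(V, G))     # ideal column currents
print(nonideal_vmm(V, G))  # attenuated by the series resistances
```

Comparing the two outputs shows how even a single parasitic resistance perturbs every dot product the crossbar computes, which is the kind of error a DNN evaluation framework must model at scale.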

In this work, we present a fast and accurate simulation framework that enables training and evaluation of large-scale DNNs on resistive-crossbar-based hardware fabrics. We propose a Fast Crossbar Model (FCM) that accurately captures the errors arising from non-idealities while being five orders of magnitude faster than circuit simulation. We develop Rx-Caffe, an enhanced version of the popular Caffe machine learning software framework, to train and evaluate DNNs on crossbars. We use Rx-Caffe to evaluate large-scale image recognition DNNs designed for the ImageNet dataset. Our experiments reveal that crossbar non-idealities can degrade DNN accuracy significantly, by 9.6%-32%. To the best of our knowledge, this work is the first accuracy evaluation of large-scale DNNs on resistive crossbars, and it highlights the need for further efforts to address the impact of non-idealities.