Task 004: Neural Primitives
|Event Date:||January 20, 2022|
|Time:||11:00 am (ET) / 8:00 am (PT)|
Utkarsh Saxena, Purdue University: "Towards ADC-Less Compute-In-Memory Accelerators for Energy Efficient Deep Learning"
ABSTRACT: Compute-in-Memory (CiM) hardware has shown great potential in accelerating Deep Neural Networks (DNNs). However, most CiM accelerators perform matrix-vector multiplication (MVM) operations in the analog domain and rely on costly analog-to-digital converters (ADCs) to enable communication with the digital domain. Additionally, the need to accurately capture the dynamic range of analog partial sums requires high-precision ADCs. Consequently, analyses of the energy breakdown within CiM accelerators show that a significant fraction of the compute energy is consumed by the ADCs. In this work, we propose a hardware-software co-design approach that reduces these ADC costs through partial-sum quantization, thereby improving the energy efficiency of CiM accelerators. Specifically, we replace ADCs with 1-bit sense amplifiers and develop a quantization-aware training methodology to compensate for the loss in representational ability. We show that the proposed ADC-less DNN model achieves a 1.1x-9.6x reduction in energy consumption while maintaining accuracy within 1% of the DNN model without partial-sum quantization on the CIFAR-10 and MNIST datasets.
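The core idea in the abstract, reading each crossbar slice's analog partial sum through a 1-bit sense amplifier (a sign readout) instead of a multi-bit ADC and accumulating the quantized partial sums digitally, can be sketched numerically. This is an illustrative simulation only; the function name, the column-slicing scheme, and the `cols_per_xbar` parameter are assumptions, not the authors' actual implementation.

```python
import numpy as np

def adc_less_mvm(x, W, cols_per_xbar=4):
    """Simulate an ADC-less matrix-vector multiply (illustrative sketch).

    The input dimension is split into crossbar-sized slices; each slice
    produces an analog partial sum per output column, which is read out
    by a 1-bit sense amplifier (modeled here as np.sign) and accumulated
    digitally. Names and slicing are hypothetical, for illustration.
    """
    n_in = W.shape[0]
    acc = np.zeros(W.shape[1])
    for start in range(0, n_in, cols_per_xbar):
        sl = slice(start, start + cols_per_xbar)
        partial = x[sl] @ W[sl, :]   # analog partial sum of one slice
        acc += np.sign(partial)      # 1-bit sense-amplifier readout
    return acc

# Toy example: 8 inputs split into two 4-row crossbar slices.
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
W = rng.standard_normal((8, 3))
print(adc_less_mvm(x, W))
```

Because each slice contributes only its sign, the accumulated result loses magnitude information, which is exactly the representational gap the abstract's quantization-aware training is meant to compensate for.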
BIO: Utkarsh is a third-year PhD student working under Prof. Kaushik Roy at Purdue University. His research interests include hardware-software co-design for deep learning applications. Currently, he is working on compute-in-memory hardware for the energy-efficient acceleration of machine learning algorithms.