Task 019: Robustness of Neural Systems

Event Date: May 6, 2021
Time: 11:00 am (ET) / 8:00 am (PT)
Charbel Sakr, University of Illinois
Signal Processing Methods to Enhance the Energy Efficiency of In-Memory Computing Architectures
This talk presents methods for the efficient implementation of neural networks on in-memory computing (IMC) architectures. First, an optimal clipping criterion (OCC) is derived to minimize the precision of the column analog-to-digital converters (ADCs) and thereby meet tight area constraints. The OCC clips the signal swing on the bitline to better utilize the ADC quantization levels. For a Gaussian distributed signal, the OCC is shown to reduce ADC precision requirements by 3 bits and to provide a 14 dB gain in signal-to-quantization-noise ratio (SQNR) over the commonly used full-range quantizer. Next, a quantization noise analysis of the input-sliced weight-parallel (ISWP) architecture is presented, accounting for the accumulated quantization noise of the column ADCs. It is shown that, contrary to popular belief, dot products computed using fully-sliced inputs exhibit accuracy similar to those computed using bit-sliced inputs, while being significantly more energy-efficient. Armed with the OCC and the ISWP quantization noise analysis, efficient implementation of neural networks on IMC architectures is demonstrated. When mapping VGG-9, ResNet-18, and AlexNet, it is shown that ADC precision can be lowered by 2-to-3 bits and energy consumption reduced by an order of magnitude compared to common methods, while maintaining accuracy.
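The intuition behind clipped quantization can be sketched numerically: for a Gaussian signal, a full-range quantizer wastes its levels covering rare extreme values, while clipping the swing first spends the same levels on the bulk of the distribution and raises SQNR. The minimal sketch below searches for a good clip level by brute force; it is an illustration of the general principle, not the OCC derivation from the talk, and the quantizer details (mid-rise, unit-variance input, 4-bit) are assumptions.

```python
# Illustrative sketch (not the talk's OCC formula): compare the SQNR of a
# uniform B-bit quantizer on Gaussian data with and without clipping.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # unit-variance Gaussian "bitline" signal

def quantize(x, clip, bits):
    """Uniform mid-rise quantizer with 2**bits levels over [-clip, clip]."""
    levels = 2 ** bits
    step = 2 * clip / levels
    xc = np.clip(x, -clip, clip - 1e-12)          # clip the signal swing
    return (np.floor(xc / step) + 0.5) * step      # map to level centers

def sqnr_db(x, clip, bits):
    """Signal-to-quantization-noise ratio in dB."""
    q = quantize(x, clip, bits)
    return 10 * np.log10(np.mean(x**2) / np.mean((x - q) ** 2))

bits = 4
full_range = np.max(np.abs(x))   # full-range quantizer: cover every sample
clips = np.linspace(0.5, full_range, 200)
best = max(clips, key=lambda c: sqnr_db(x, c, bits))  # brute-force clip search

print(f"full-range SQNR: {sqnr_db(x, full_range, bits):.1f} dB")
print(f"clipped    SQNR: {sqnr_db(x, best, bits):.1f} dB (clip ~ {best:.2f} sigma)")
```

Running the sketch shows the clipped quantizer gaining several dB of SQNR over the full-range one at the same bit width, which is the effect the OCC exploits analytically to lower ADC precision.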
Charbel Sakr received the PhD (2021) and MS (2017) degrees from the University of Illinois, where he worked with Professor Naresh Shanbhag in the Coordinated Science Laboratory. His research interests are in resource-constrained machine learning, with a focus on the analysis and implementation of reduced-precision algorithms and models.