Task 005/006: Neural Fabrics/Programming and Evaluation Framework

Event Date: May 14, 2020
School or Program: Electrical and Computer Engineering
Gokul Krishnan, Arizona State University
Interconnect-Aware Area and Energy Optimization for In-Memory Acceleration of DNNs

Abstract: In-memory computing effectively reduces latency and energy consumption for deep neural networks (DNNs). However, current architectures employ homogeneous tiles that do not account for the non-uniform distribution of computation and memory across DNN layers. Consequently, this design choice suffers from low hardware utilization and high network-on-chip (NoC) energy consumption. In this work, we first propose an area-aware optimization method to determine the optimal heterogeneous tile structure for SRAM- and ReRAM-based DNN architectures, achieving up to 62% higher utilization and a 78% reduction in area. Subsequently, we propose an energy-aware NoC optimization method that decreases NoC energy consumption by 74% on average. We demonstrate that combining both methods yields a 10% to 78% reduction in the energy-area product for a wide range of DNNs. Finally, we propose an open-source, cross-layer benchmarking tool for in-memory computing that covers circuit-level parasitics, on-chip and chip-to-chip communication, and architectural exploration.
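To make the heterogeneous-tile idea concrete, the following is a minimal, hypothetical Python sketch of how choosing a tile size per layer (rather than one homogeneous size for all layers) can raise utilization and shrink mapped area. The layer sizes, candidate tile dimensions, and unit-area cost model are illustrative assumptions for exposition only, not the optimization method or the benchmark tool described in the talk.

```python
# Illustrative sketch only: layer shapes, tile candidates, and the cost model
# below are assumptions, not the method presented in the talk.
from dataclasses import dataclass


@dataclass
class Layer:
    weights: int  # number of weight parameters mapped onto crossbar tiles


# Assumed design space of square crossbar tile dimensions (rows = cols).
TILE_SIZES = [64, 128, 256, 512]


def tiles_needed(layer: Layer, tile: int) -> int:
    """Tiles required to hold the layer's weights (ceiling division)."""
    return -(-layer.weights // (tile * tile))


def utilization(layer: Layer, tile: int) -> float:
    """Fraction of allocated tile cells that hold real weights."""
    return layer.weights / (tiles_needed(layer, tile) * tile * tile)


def area(layer: Layer, tile: int, cell_area: float = 1.0) -> float:
    """Mapped area in arbitrary units: allocated cells times per-cell area."""
    return tiles_needed(layer, tile) * tile * tile * cell_area


def best_tile(layer: Layer) -> int:
    """Area-aware per-layer choice: pick the tile size maximizing utilization."""
    return max(TILE_SIZES, key=lambda t: utilization(layer, t))


# Toy network with small, medium, and large layers (assumed sizes).
layers = [Layer(weights=w) for w in (30_000, 600_000, 4_000_000)]

# Homogeneous baseline: one tile size for every layer.
homogeneous = sum(area(l, 256) for l in layers)
# Heterogeneous mapping: a (possibly different) tile size per layer.
heterogeneous = sum(area(l, best_tile(l)) for l in layers)

print(f"homogeneous area:   {homogeneous:.0f}")
print(f"heterogeneous area: {heterogeneous:.0f}")
```

Even with this crude utilization-only objective, the per-layer choice wastes fewer idle cells on small layers; the actual optimization in the talk additionally weighs interconnect energy, which this sketch omits.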

Gokul Krishnan (S’20) received his B.Tech. degree in electronics and communication engineering from Govt. Model Engineering College, Kochi, India, in 2016. He is currently working toward the Ph.D. degree in electrical engineering at Arizona State University, Tempe, AZ, USA. His current research interests include neuromorphic hardware and FPGA design for deep learning, joint algorithm-architecture design for on-chip learning, model compression of DNNs, and performance modeling for CMOS- and post-CMOS-based hardware architectures.