
Energy Efficient Deep Learning

Research Summary

With the increasing availability of compute power, state-of-the-art CNNs are growing rapidly in size, making them prohibitively expensive to deploy in power-constrained environments. To enable ubiquitous use of deep learning, we investigate techniques that reduce the size of a model with minimal degradation in accuracy. The algorithmic techniques we have devised include identifying redundancy among the weights of a model in a single shot, discretized learning and inference methods, and strategies that modify the network architecture to enable efficient inference. We have also developed hardware-driven approximation methods that leverage the inherent error resilience of CNNs for model compression.
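As a rough illustration of the single-shot redundancy idea, the sketch below applies PCA to a layer's flattened convolutional filters and counts how many principal components are needed to retain most of the variance; filters beyond that count are candidates for pruning. This is a hypothetical, NumPy-only sketch in the spirit of the PCA-driven papers listed below — the function name, threshold, and synthetic data are our own, not an API from those papers.

```python
import numpy as np

def significant_filter_count(weights, var_threshold=0.99):
    """Estimate filter redundancy in one shot via PCA.

    weights: array of shape (num_filters, ...); each filter is flattened.
    Returns the number of principal components needed to retain
    `var_threshold` of the variance across filters.
    """
    flat = weights.reshape(weights.shape[0], -1)
    flat = flat - flat.mean(axis=0, keepdims=True)   # center before PCA
    # Singular values give the variance captured by each component.
    s = np.linalg.svd(flat, compute_uv=False)
    var = s ** 2
    cum = np.cumsum(var) / var.sum()
    # Index of the first component at which cumulative variance
    # reaches the threshold, converted to a count.
    return int(np.searchsorted(cum, var_threshold) + 1)

# Synthetic example: 64 filters of size 3x3x3 that actually span only
# 8 underlying directions plus a little noise, so most are redundant.
rng = np.random.default_rng(0)
basis = rng.standard_normal((8, 27))
mix = rng.standard_normal((64, 8))
filters = (mix @ basis + 0.01 * rng.standard_normal((64, 27))).reshape(64, 3, 3, 3)
print(significant_filter_count(filters))
```

A small significant-component count relative to the layer's width suggests the layer can be made much narrower with little loss in accuracy.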

Recent Publications:

  1. Garg, Isha, Priyadarshini Panda, and Kaushik Roy. "A low effort approach to structured CNN design using PCA." arXiv preprint arXiv:1812.06224 (2018).
  2. Panda, Priyadarshini, Abhronil Sengupta, and Kaushik Roy. "Conditional deep learning for energy-efficient and enhanced pattern recognition." 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2016.
  3. Chakraborty, Indranil, et al. "PCA-driven Hybrid network design for enabling Intelligence at the Edge." arXiv preprint arXiv:1906.01493 (2019).
  4. Panda, Priyadarshini, et al. "FALCON: Feature Driven Selective Classification for Energy-Efficient Image Recognition." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (2016).
  5. Sarwar et al. "Energy-efficient neural computing with approximate multipliers." ACM Journal on Emerging Technologies in Computing Systems (JETC) 14.2 (2018): 16.

Current Students: Jason Allred, Aayush Ankit, Indranil Chakraborty, Isha Garg, Deboleena Roy