Continual and Incremental Learning

Research Summary

The capacity to learn new things without forgetting existing knowledge is innate to humans. Artificial neural networks, however, suffer from catastrophic forgetting: training on new data degrades previously learned representations, making it hard to grow a network and absorb incoming data fluidly. We are exploring techniques that exploit stochasticity or architectural enhancements to enable lifelong learning.

Recent Publications:

  1. Allred, Jason M., and Kaushik Roy. "Stimulating STDP to Exploit Locality for Lifelong Learning without Catastrophic Forgetting." arXiv preprint arXiv:1902.03187 (2019).
  2. Roy, Deboleena, Priyadarshini Panda, and Kaushik Roy. "Tree-CNN: A hierarchical deep convolutional neural network for incremental learning." arXiv preprint arXiv:1802.05800 (2018).
  3. Panda, Priyadarshini, et al. "ASP: Learning to forget with adaptive synaptic plasticity in spiking neural networks." IEEE Journal on Emerging and Selected Topics in Circuits and Systems 8.1 (2017): 51-64.
  4. Sarwar, Syed Shakib, Aayush Ankit, and Kaushik Roy. "Incremental learning in deep convolutional neural networks using partial network sharing." arXiv preprint arXiv:1712.02719 (2017).
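The partial-network-sharing idea in publication 4 can be illustrated structurally: a shared, frozen feature extractor serves all tasks, and each new set of classes gets its own trainable branch while earlier branches are frozen. The sketch below is a hypothetical illustration of that structure only (class names `SharedBackbone`, `TaskHead`, `IncrementalNet` are invented here, not the authors' code, and no actual training is performed):

```python
class SharedBackbone:
    """Feature extractor shared across all tasks; its weights stay frozen
    so that adding a new task cannot overwrite old representations."""
    def __init__(self):
        self.trainable = False


class TaskHead:
    """Task-specific branch classifying one incremental set of classes."""
    def __init__(self, classes):
        self.classes = list(classes)
        self.trainable = True  # only the newest head is trained


class IncrementalNet:
    """Sketch of incremental learning via partial network sharing:
    previously learned heads are frozen; only the new head is updated."""
    def __init__(self):
        self.backbone = SharedBackbone()
        self.heads = []

    def add_task(self, classes):
        # Freeze all existing heads before attaching the new branch,
        # so old tasks are protected from catastrophic forgetting.
        for head in self.heads:
            head.trainable = False
        self.heads.append(TaskHead(classes))

    def trainable_parts(self):
        return [h for h in self.heads if h.trainable]


net = IncrementalNet()
net.add_task(["cat", "dog"])      # first task: head is trainable
net.add_task(["car", "truck"])    # second task: first head is now frozen
```

In a real implementation the backbone and heads would hold convolutional and fully connected layers, and "frozen" would mean excluding those weights from gradient updates; only the bookkeeping pattern is shown here.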

Current Students: Aayush Ankit, Jason Allred, Deboleena Roy