Task 001 - Neuro-inspired Algorithms for Efficient and Lifelong Learning

Event Date: June 11, 2020
School or Program: Electrical and Computer Engineering
Gobinda Saha, Purdue University
Structured Compression and Sharing of Representational Space for Continual Learning
Abstract: Humans learn adaptively and efficiently throughout their lives. However, learning tasks incrementally causes artificial neural networks to overwrite relevant information learned on older tasks, resulting in ‘Catastrophic Forgetting’. Existing efforts to overcome this phenomenon use resources poorly, for instance by storing older data or parametric importance scores, or by growing the network architecture. We propose an algorithm that enables a network to learn continually and efficiently by partitioning the learned space into a Core space, which serves as the condensed knowledge base of previously learned tasks, and a Residual space, which acts as a scratch space for learning the current task. After learning each task, the Residual is analyzed for redundancy, both within itself and with respect to the learned Core space, and a minimal set of dimensions is added to the Core space. The remaining Residual is freed up for learning the next task. We evaluate our algorithm on the P-MNIST, CIFAR-10 and CIFAR-100 datasets and achieve accuracy comparable to state-of-the-art methods while overcoming the problem of catastrophic forgetting. Additionally, the structured nature of the resulting architecture makes our algorithm well suited for practical use, giving up to a 5x improvement in inference energy efficiency over the current state of the art.
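
To make the Core/Residual idea concrete, the sketch below shows one plausible way such an update could be implemented. It is a minimal illustration, not the speaker's actual method: it assumes the Core space is kept as an orthonormal basis over a layer's activations and that redundancy in the Residual is measured with an SVD and a variance-retention threshold. The function name update_core_space and the parameter var_threshold are illustrative choices, not taken from the talk.

```python
import numpy as np

def update_core_space(core_basis, residual_activations, var_threshold=0.99):
    """Hypothetical Core-space update after training on one task.

    core_basis:           (d, k) orthonormal Core directions (k may be 0)
    residual_activations: (d, n) layer activations collected on the current task
    var_threshold:        fraction of activation variance the updated Core retains
    """
    # Remove the part of the activations already explained by the Core space,
    # so only genuinely new directions are considered (redundancy w.r.t. Core).
    if core_basis.shape[1] > 0:
        residual_activations = residual_activations - core_basis @ (
            core_basis.T @ residual_activations
        )

    # SVD exposes redundancy within the Residual itself: a few leading
    # singular directions typically capture most of the remaining variance.
    u, s, _ = np.linalg.svd(residual_activations, full_matrices=False)

    total_energy = np.sum(s ** 2)
    if total_energy == 0.0:
        return core_basis  # nothing new to add; all Residual stays free

    # Minimal number of new directions needed to reach the variance threshold.
    energy = np.cumsum(s ** 2) / total_energy
    n_new = int(np.searchsorted(energy, var_threshold) + 1)

    # Fold those directions into the Core; the remaining dimensions stay in
    # the Residual and are freed up for learning the next task.
    return np.concatenate([core_basis, u[:, :n_new]], axis=1)
```

Under these assumptions, the Core basis grows only by the minimal number of directions needed per task, which is what keeps the resulting architecture compact and structured.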
 
Bio: Gobinda Saha received the B.Sc. and M.Sc. degrees in electrical and electronic engineering from Bangladesh University of Engineering and Technology, Dhaka, Bangladesh, in 2013 and 2015, respectively. He is currently pursuing a Ph.D. degree at Purdue University under the guidance of Prof. Kaushik Roy. In summer 2019, he worked as a Research Intern with the memory solution team at GlobalFoundries. His primary research interests include efficient algorithm design for continual learning, representation learning, and meta-learning.