Task 001/002: Neuro-inspired Algorithms for Efficient and Lifelong Learning/Theoretical Underpinnings of Neuro-inspired Computing

Event Date: April 23, 2020
Nikunj Saunshi, Princeton University
A Sample Complexity Separation between Non-Convex and Convex Meta-Learning

Abstract: One popular trend in meta-learning is to learn, from many training tasks, a common initialization for a gradient-based method that can then solve a new task with few samples. The theory of meta-learning is still in its early stages, and several recent learning-theoretic analyses of methods such as Reptile [Nichol et al., 2018] apply only to convex models. This work shows that convex-case analysis may be insufficient to understand the success of meta-learning, and that even for non-convex models it is important to look inside the optimization black box, specifically at properties of the optimization trajectory. We construct a simple meta-learning instance that captures the problem of one-dimensional subspace learning. For the convex formulation of linear regression on this instance, we show that the new-task sample complexity of any initialization-based meta-learning algorithm is \Omega(d), where d is the input dimension. In contrast, for the non-convex formulation of a two-layer linear network on the same instance, we show that both Reptile and multi-task representation learning can achieve new-task sample complexity of O(1), demonstrating a separation from convex meta-learning. Crucially, analyses of the training dynamics of these methods reveal that they can meta-learn the correct subspace onto which the data should be projected.
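To make the setup concrete, below is a minimal sketch of initialization-based meta-learning in the spirit of the abstract: Reptile applied to a two-layer linear network on a synthetic regression instance where all tasks share a one-dimensional subspace. The dimensions, learning rates, noise level, and task distribution are illustrative assumptions, not the paper's exact construction or analysis.

```python
# Illustrative sketch (assumed hyperparameters, not the paper's construction):
# Reptile meta-learns an initialization for a two-layer linear network on tasks
# whose regression targets depend only on one shared direction u in R^d.
import numpy as np

rng = np.random.default_rng(0)
d, h = 50, 10                        # input dimension, hidden width (assumed)
u = rng.normal(size=d)
u /= np.linalg.norm(u)               # shared 1-D subspace direction across tasks

def sample_task():
    """A task is y = alpha * <u, x> + noise, with a task-specific slope alpha."""
    alpha = rng.normal()
    def draw(n):
        X = rng.normal(size=(n, d))
        y = alpha * (X @ u) + 0.01 * rng.normal(size=n)
        return X, y
    return draw

def predict(X, W1, w2):
    return X @ W1 @ w2               # two-layer linear network

def grads(X, y, W1, w2):
    err = predict(X, W1, w2) - y     # shape (n,)
    n = len(y)
    gW1 = X.T @ np.outer(err, w2) / n
    gw2 = (X @ W1).T @ err / n
    return gW1, gw2

def adapt(X, y, W1, w2, steps=20, lr=0.05):
    """A few full-batch gradient steps on one task's data, from the given init."""
    W1, w2 = W1.copy(), w2.copy()
    for _ in range(steps):
        gW1, gw2 = grads(X, y, W1, w2)
        W1 -= lr * gW1
        w2 -= lr * gw2
    return W1, w2

# Meta-training with Reptile: nudge the initialization toward each task's
# adapted parameters.
W1 = rng.normal(size=(d, h)) / np.sqrt(d)
w2 = rng.normal(size=h) / np.sqrt(h)
meta_lr = 0.5
for _ in range(2000):
    X, y = sample_task()(20)
    W1_t, w2_t = adapt(X, y, W1, w2)
    W1 += meta_lr * (W1_t - W1)
    w2 += meta_lr * (w2_t - w2)

# New task with very few samples: adapt from the meta-learned initialization.
draw = sample_task()
X_tr, y_tr = draw(5)
X_te, y_te = draw(1000)
W1_a, w2_a = adapt(X_tr, y_tr, W1, w2)
mse = np.mean((predict(X_te, W1_a, w2_a) - y_te) ** 2)
print(f"few-shot test MSE from meta-learned init: {mse:.4f}")
```

Comparing the final few-shot error against adaptation from a fresh random initialization is one way to see the phenomenon the talk analyzes: if meta-training succeeds, the learned first layer aligns with the shared subspace, so a handful of samples from a new task suffices.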


Bio: Nikunj Saunshi is a PhD student in the Department of Computer Science at Princeton University, working with Prof. Sanjeev Arora. His research focuses on the theoretical study of transfer learning and unsupervised learning, and their applications to natural language processing.