Task 001/002: Neuro-Inspired Algorithms for Efficient and Lifelong Learning/Theoretical Underpinnings of Neuro-inspired Computing

Event Date: February 6, 2020
School or Program: Electrical and Computer Engineering
Joel Dapello, Massachusetts Institute of Technology
On the Geometry and Orientation of Object Manifolds in Deep Nets and the Brain

Abstract: The mixed selectivity of a population of sensory neurons to different stimulus conditions (such as distance, pose, background, etc.) gives rise to an “object manifold”, defined as the set of population response vectors elicited by stimulus variations that do not change perceived object identity. In this work, the amount of object identity information in a population of neurons is quantified through “classification capacity”, defined as the maximum number of manifolds per neuron that can be classified into two classes by a hyperplane. This manifold classification capacity depends on the geometry of the manifolds in the population state space, in particular their radius and dimension. Other important statistical properties of the manifolds are their correlation structure, such as correlations between their centers or axes of variation. Together, these properties characterize how disentangled object representations are, both geometrically and computationally. Here we analyze the geometry and orientation of object manifolds in different regions of the macaque ventral stream, in response to objects with a variety of orientations and backgrounds, and compare these to the object manifolds from a variety of ImageNet-trained deep convolutional neural networks in response to the same stimuli. Our analysis shows that separable geometry emerges both in the tested deep nets and in the macaque ventral stream. While overall trends are qualitatively similar between different models, the rates and trajectories of disentangling differ; furthermore, correlation measures between manifold centers and manifold axes indicate that the population responses in the macaque neural data are far more correlated than those in the deep networks.
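The notion of classification capacity above can be illustrated with a toy simulation: generate random "manifolds" as point clouds around random centers, assign each manifold a random binary label, and test with a linear program whether a single hyperplane separates the two classes. All function names and parameter choices here are illustrative sketches under stated assumptions, not the analysis method from the talk:

```python
import numpy as np
from scipy.optimize import linprog

def linearly_separable(points, labels):
    """Feasibility LP: does a hyperplane through the origin classify
    every point with its manifold's label?  y_i * (w . x_i) >= 1."""
    A_ub = -labels[:, None] * points          # rewrite as -y_i x_i . w <= -1
    b_ub = -np.ones(len(points))
    res = linprog(c=np.zeros(points.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * points.shape[1], method="highs")
    return res.status == 0                    # status 0 = feasible

def capacity_curve(n_neurons=20, pts_per_manifold=4, radius=0.2,
                   trials=20, seed=None):
    """Fraction of separable random dichotomies vs. load alpha = P/N,
    where P is the number of manifolds and N the number of neurons."""
    rng = np.random.default_rng(seed)
    curve = {}
    for n_manifolds in (10, 20, 40, 80):
        successes = 0
        for _ in range(trials):
            centers = rng.standard_normal((n_manifolds, n_neurons))
            # each manifold: a small point cloud around its center
            clouds = centers[:, None, :] + radius * rng.standard_normal(
                (n_manifolds, pts_per_manifold, n_neurons))
            X = clouds.reshape(-1, n_neurons)
            # one random binary label per manifold, shared by its points
            y = np.repeat(rng.choice([-1.0, 1.0], size=n_manifolds),
                          pts_per_manifold)
            successes += linearly_separable(X, y)
        curve[n_manifolds / n_neurons] = successes / trials
    return curve
```

For tight manifolds (small `radius`) the separable fraction falls off near the classical point capacity of about two points per neuron; increasing the radius (larger manifold extent) lowers the capacity, mirroring the dependence on manifold radius and dimension described in the abstract.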


Bio: Joel Dapello is a PhD candidate in the School of Engineering and Applied Sciences at Harvard University, working in Jim DiCarlo’s lab at MIT. His research interests lie at the interface of biological and artificial intelligence, centering broadly on the representation and processing of information in both biological and artificial neural networks. Before starting his PhD, Joel was the founding engineer at BioBright, a biotech company devoted to improving the scientific workflow and increasing reproducibility in biological and medical research; simultaneously, as a research fellow in Ed Boyden’s group, he helped design novel modalities for interfacing with the brain. Joel received his Bachelor’s degree in cellular and molecular neuroscience from Hampshire College.