Task 008 - Cognition on Compressed and Unreliable Data

Event Date: June 4, 2020
School or Program: Electrical and Computer Engineering
Kiran Lekkala, University of Southern California
Visual Attention for Multi-Task Meta-learning
Abstract: Recent multi-task methods in computer vision try to leverage inter-modal features by training on multiple modalities/tasks together, where these tasks are assumed to be fixed. Since data is prone to domain and task shift, it is essential to account for these shifts during multi-task learning. We present a scenario involving the simultaneous learning of multiple tasks and adaptation to unseen task/domain distributions within those high-level tasks. We then propose an attention mechanism that benefits models trained on multiple tasks, such as in the presented scenario. Our approach concentrates on the network filters that are most relevant to a particular task. To achieve this, the attention module learns task representations during training, which can then be used to obtain better representations for unseen tasks.
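
The abstract does not include implementation details, but the idea of an attention module that emphasizes task-relevant filters can be illustrated with a minimal sketch. The module, class name, and sizes below are assumptions for illustration only, not the speaker's actual method: a learned task embedding produces per-filter gating weights that rescale a backbone's feature maps.

```python
# Illustrative sketch only (not the speaker's code): a task-conditioned
# attention module that re-weights convolutional filters using a learned
# task embedding. Names and dimensions are hypothetical.
import torch
import torch.nn as nn

class TaskFilterAttention(nn.Module):
    """Scales each output filter of a conv layer by a task-dependent weight."""
    def __init__(self, num_tasks: int, num_filters: int, embed_dim: int = 32):
        super().__init__()
        # One learned embedding per high-level task (assumed design choice).
        self.task_embedding = nn.Embedding(num_tasks, embed_dim)
        # Maps a task embedding to per-filter attention weights in (0, 1).
        self.gate = nn.Sequential(
            nn.Linear(embed_dim, num_filters),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # features: (batch, num_filters, H, W); task_id: (batch,)
        weights = self.gate(self.task_embedding(task_id))      # (batch, num_filters)
        return features * weights.unsqueeze(-1).unsqueeze(-1)  # broadcast over H, W

# Minimal usage example with made-up sizes.
if __name__ == "__main__":
    attn = TaskFilterAttention(num_tasks=3, num_filters=64)
    feats = torch.randn(8, 64, 16, 16)      # backbone feature maps
    task_ids = torch.randint(0, 3, (8,))    # which task each sample belongs to
    out = attn(feats, task_ids)
    print(out.shape)  # torch.Size([8, 64, 16, 16])
```

In this sketch, the per-task embedding plays the role of the learned task representation mentioned in the abstract; for unseen tasks one could instead infer an embedding from data rather than look it up by index.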
 
Bio: Kiran Lekkala is a second-year PhD student at the University of Southern California working with Dr. Laurent Itti. His research interests include adaptive representation learning and model-based imitation learning for robotics.