Wachs gives keynote at SAI CVC 2019

Photo: Juan Wachs speaking at SAI-CVC 2019
An IE professor gave a keynote presentation on lifelong learning at the Science & Information (SAI) Computer Vision Conference (CVC) 2019 in Las Vegas, NV.

Juan Wachs, the James A. and Sharon M. Tompkins Rising Star Associate Professor in Industrial Engineering, spoke on "Towards Lifelong Learning Machines (L2L): How can zero shot learning lead to L2L?" at the April 25 & 26 conference.

VIDEO: https://youtu.be/R5xDHOcpU0s

SUMMARY:
One-shot learning is a paradigm in learning theory that explores the ability of machines to recognize a class or category of objects after observing only a single instance of it. The system must generalize well enough to correctly categorize future observations of the same “thing” on the grounds that each new observation shares fundamental commonalities with the previously observed example. Classical machine learning approaches treat this as a purely numerical challenge, in which the “better” algorithm is judged solely on classification accuracy. But this trivializes the power of the technique in real-world applications, where the context of the observation is critical to efficient generalization. In fact, without a context-dependent model of what is to be observed, one-shot learning would be an oxymoron. One-shot gesture recognition is one such challenge, in which teams compete to recognize hand/arm gestures after only one training instance.

My work proposes a novel solution to one-shot recognition of human action, and specifically human gestures, using an integrative approach: a method to capture the variance of a gesture by looking at both the process of human cognition and the execution of the movement, rather than just the outcome (the gesture itself). We achieve this from the perspectives of neuroscience and linguistics by employing EEG sensors on human observers to capture what they actually remember from the gesture.

Further, we propose to leverage one-shot learning (OSL) approaches coupled with conventional zero-shot learning (ZSL) approaches to address the problem of Hard Zero-Shot Learning (HZSL), whose main aim is to recognize unseen classes (zero examples) with limited (one or a few examples per class) training information.
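The one-shot setting described in the abstract can be illustrated as nearest-neighbor matching in an embedding space: each class is represented by a single stored exemplar, and a new observation is assigned to the class of its most similar exemplar. This is a minimal sketch of the general paradigm, not the speaker's actual EEG- and linguistics-based method; the embedding vectors and gesture class names here are invented for illustration.

```python
import numpy as np

def one_shot_classify(support, query):
    """Assign the query to the class of its most similar exemplar.

    support: dict mapping a class name to one embedding vector
             (the single training instance per class).
    query:   embedding vector of the new observation.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Pick the class whose lone exemplar is closest in cosine similarity.
    return max(support, key=lambda c: cosine(support[c], query))

# Toy embeddings: one exemplar per (hypothetical) gesture class.
support = {
    "wave":  np.array([1.0, 0.1, 0.0]),
    "point": np.array([0.0, 1.0, 0.2]),
}
query = np.array([0.9, 0.2, 0.05])  # a new observation resembling "wave"
print(one_shot_classify(support, query))  # -> wave
```

The quality of such a system rests entirely on the embedding: if it encodes the context-dependent structure of the gesture (which is where the talk's cognitive and linguistic modeling comes in), a single exemplar per class can suffice.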

CVC 2019 was held April 25-26, 2019, to explore discovery, progress, and achievements related to Machine Vision, Image Processing, Data Science and Pattern Recognition. Participants benefited from direct interaction and discussions with world leaders in Computer Vision. The two-day conference included four keynote talks by esteemed speakers, 112 paper presentations, and eight poster presentations, with networking opportunities for participants from over 40 countries.