
June 26, 2018

PhD Seminar - Maria Eugenia Cabrera

Event Date: June 26, 2018
Hosted By: Dr. Juan P. Wachs
Time: 1:00 - 2:00 PM
Location: GRIS 302
Contact Name: Cheryl Barnhart
Contact Phone: 4-5434
Contact Email: cbarnhar@purdue.edu
Open To: All
School or Program: Industrial Engineering
“Learning Gestures for the First Time”

ABSTRACT

Humans understand meaning intuitively and can generalize from a single observation, whereas machines require many examples to learn and recognize a new physical expression. This gap is one of the main roadblocks to natural human-machine interaction. To achieve natural interaction with machines, a framework must be developed that incorporates the adaptability humans display when understanding gestures from a single observation.

This problem is known as one-shot gesture recognition, and it has been studied before. Most prior approaches, however, rely heavily on purely numerical solutions and set aside the mechanisms humans use to perceive and execute gestures. This dissertation proposes a framework that incorporates the processes associated with gesture perception and execution into the paradigm of one-shot gesture recognition. By observing how humans perceive and process gestures, we can learn how machines might artificially generate "human-like" gestures. Two implemented approaches, referred to as the forward and backward approaches, rely on different aspects of human motion to generate these artificial gesture examples. The forward approach leverages spatial variability centered on the human shoulder and the reach of the hand within that work envelope. The backward approach, conversely, leverages the kinematic model of the human arm and trajectory-planning strategies such as minimizing jerk and energy expenditure.
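
The abstract does not spell out either approach in detail. As a rough, hypothetical illustration of the backward approach's jerk-minimization idea, the Python sketch below joins a gesture's key points with standard minimum-jerk (fifth-order polynomial) segments and jitters the key points to produce artificial variation. The function names, the Gaussian noise model, and all constants are illustrative assumptions, not the dissertation's actual method.

```python
import numpy as np

def min_jerk_segment(p0, p1, n_steps=50):
    """Minimum-jerk interpolation between two 3-D key points.

    Uses the classic fifth-order profile 10*t^3 - 15*t^4 + 6*t^5, which
    minimizes integrated squared jerk for a rest-to-rest movement.
    """
    tau = np.linspace(0.0, 1.0, n_steps)[:, None]       # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5          # smooth 0 -> 1 profile
    return p0 + s * (p1 - p0)

def synth_gesture(gist_points, n_steps=50, noise_std=0.01, rng=None):
    """Generate one artificial gesture example from the gist key points.

    Hypothetical variability model: jitter the key points with Gaussian
    noise, then connect them with minimum-jerk segments so the resulting
    trajectory stays smooth and 'human-like'.
    """
    rng = np.random.default_rng(rng)
    pts = np.asarray(gist_points, dtype=float)
    pts = pts + rng.normal(0.0, noise_std, pts.shape)   # perturb key points
    segments = [min_jerk_segment(pts[i], pts[i + 1], n_steps)
                for i in range(len(pts) - 1)]
    return np.vstack(segments)

# Example: three made-up key points from a single observed gesture,
# expanded into ten artificial training samples.
gist = [[0.0, 0.0, 0.0], [0.2, 0.3, 0.1], [0.5, 0.1, 0.0]]
examples = [synth_gesture(gist, rng=i) for i in range(10)]
```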

Both approaches begin with the same subset of key points within the motion trajectory. These points, referred to as the gist of the gesture, capture large variability within each gesture while preserving the main traits of the gesture class. The performance of the proposed framework is evaluated in terms of its independence from the classification method used, its efficiency relative to traditional N-shot learning approaches, and the coherence of recognition between machines and humans. Applied to one-shot learning, the proposed framework exploits the way humans use their bodies as context for gesture recognition, generating artificial gesture examples that capture human-like variation for all gesture classes.
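
To make the classifier-independence claim concrete, the hypothetical sketch below shows how such artificial examples could feed any off-the-shelf classifier: the single observation per class is expanded into many synthetic samples, which then train a standard model. The synth_fn hook, the flattened-trajectory features, and the k-NN choice are all assumptions for illustration, not the framework's prescribed pipeline.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def one_shot_train(single_observations, synth_fn, n_synth=50):
    """Train a conventional classifier from one example per gesture class.

    single_observations: {class_label: gist key points}, one entry per class.
    synth_fn: generator of artificial examples (e.g. synth_gesture above).
    Assumes every gist has the same number of key points so the flattened
    feature vectors align; real trajectories would need resampling.
    """
    X, y = [], []
    for label, gist in single_observations.items():
        for i in range(n_synth):
            trajectory = synth_fn(gist, rng=i)
            X.append(trajectory.ravel())   # flatten (T, 3) path to a vector
            y.append(label)
    clf = KNeighborsClassifier(n_neighbors=3)  # any classifier would do here
    clf.fit(np.asarray(X), np.asarray(y))
    return clf

# Usage (with synth_gesture from the previous sketch):
# clf = one_shot_train({"wave": wave_gist, "point": point_gist}, synth_gesture)
```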