PuppetX: A Framework for Gestural Interactions With User Constructed Playthings

May 28, 2014

Authors: Saikat Gupta, Sujin Jang, and Karthik Ramani
Proceedings of the ACM Conference on Advanced Visual Interfaces (AVI '14), May 27-30, 2014, Como, Italy, pp. 73-80.

We present PuppetX, a framework for both constructing playthings and playing with them using spatial body and hand gestures. The framework allows users to construct various puppet-like playthings from modular components representing basic geometric shapes. It is topologically aware: depending on its configuration, PuppetX automatically determines its own topological construct. Once a plaything is made, users can interact with it naturally via body and hand gestures detected by depth-sensing cameras. This gives users the freedom to create playthings using our components and the ability to control them through full-body interactions. Our framework creates affordances for a new variety of gestural interactions with physically constructed objects. As a by-product, a virtual 3D model is created, which can be animated as a proxy for the physical construct. Our algorithms recognize hand and body gestures across various configurations of the playthings. Through our work, we push the boundaries of interaction with user-constructed objects using large gestures involving the whole body as well as fine gestures involving the fingers. We discuss the results of a study of how users interact with the playthings and conclude by demonstrating the abilities of gestural interaction with PuppetX across a variety of interaction scenarios.
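
The topology awareness described in the abstract lends itself naturally to a graph formulation: components are nodes, physical connections are edges, and the plaything's class can be read off its connectivity. Below is a minimal, hypothetical sketch of how such a classification might work. The names here (Plaything, connect, classify) are illustrative assumptions for exposition, not the authors' actual implementation or API.

```python
# Hypothetical sketch: inferring a plaything's topology from the
# connectivity of its modular components, in the spirit of PuppetX's
# topology awareness. All names are illustrative, not the paper's API.
from collections import defaultdict

class Plaything:
    """A plaything modeled as an undirected graph of components."""

    def __init__(self):
        # component id -> set of directly connected component ids
        self.edges = defaultdict(set)

    def connect(self, a, b):
        """Record a physical connection between two components."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def classify(self, root):
        """Guess a topology class from the number of limb chains
        attached to the root (torso) component."""
        limbs = len(self.edges[root])
        if limbs >= 4:
            return "quadruped"
        if limbs >= 2:
            return "biped"
        return "serial chain"  # e.g. a snake-like puppet

# Example: a four-legged puppet built from five components.
p = Plaything()
for leg in ("leg1", "leg2", "leg3", "leg4"):
    p.connect("torso", leg)
print(p.classify("torso"))  # -> "quadruped"
```

Under this formulation, mapping tracked skeleton joints from the depth sensor onto puppet joints reduces to matching the user's limb chains against the plaything's inferred limb chains.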


PuppetX framework. A) Components developed. B) Puppets created using these components.


A) Scenario 1 – play with body gestures. B) Scenario 2 – play with finger gestures. Red arrows show the depth sensors.


Sujin Jang is currently working at Motorola, Chicago, IL. He received his Ph.D. from the School of Mechanical Engineering at Purdue University in August 2017. His research at the C-Design Lab broadly involved human-computer interaction, visual analytics, machine learning, and robotics, and focused on creating methodologies and principles for the effective use of gestures in HCI. In particular, he developed methods to analyze and exploit human gestures through visual analytics integrating machine learning and information visualization; biomechanical arm fatigue analysis; a gestural user interface for human-robot interaction; and an interactive clustering and collaborative filtering approach for hand pose estimation. He also served as a teaching assistant for ME 444: Computer-Aided Design and Rapid Prototyping, and received the Estus H. and Vashti L. Magoon Award for Teaching Excellence in 2015. [Personal Website][LinkedIn]