Wearable Textile Input Device with Multimodal Sensing for Eyes-Free Mobile Interaction during Daily Activities

Apr 4, 2016

Authors: Sang Ho Yoon, Ke Huo, Karthik Ramani
Pervasive and Mobile Computing (2016), 33, 17-31
https://doi.org/10.1016/j.pmcj.2016.04.008

As pervasive computing becomes widely available during daily activities, wearable input devices that promote eyes-free interaction are needed for easy access and safety. We propose a textile wearable device that enables multimodal sensing input for eyes-free mobile interaction during daily activities. Although existing input devices possess multimodal sensing capabilities in a small form factor, they still suffer from deficiencies in compactness and softness due to the nature of their embedded materials and components. For our prototype, we paint a conductive silicone rubber onto a single layer of textile and stitch conductive threads into it. From this single textile layer, multimodal sensing (strain and pressure) values are extracted via voltage dividers. Regression analysis, multi-level thresholding, and a temporal position tracking algorithm are applied to capture the different levels and modes of finger interaction that support the input taxonomy. We then demonstrate example applications with interaction designs that allow users to control existing mobile, wearable, and digital devices. The evaluation results confirm that the prototype achieves an accuracy of ≥80% across all input types and ≥88% for locating the specific interaction areas for eyes-free interaction, and that it remains robust during motions related to daily activities. A multitasking study reveals that our prototype promotes relatively fast responses with low perceived workload compared to an existing eyes-free input.
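As a rough illustration of the sensing pipeline the abstract describes (not the authors' implementation), the Python sketch below inverts a voltage-divider reading into a sensor resistance and discretizes it into press levels via multi-level thresholding. The supply voltage, fixed resistor value, and thresholds are all assumed placeholder values; the paper derives its levels from regression analysis on real sensor readings.

```python
# Minimal sketch: recover piezoresistive sensor resistance from a
# voltage-divider reading, then map it to discrete input levels.
# All component values and thresholds below are illustrative assumptions.

V_IN = 3.3        # assumed supply voltage (V)
R_FIXED = 10_000  # assumed fixed divider resistor (ohms)

def sensor_resistance(v_out: float) -> float:
    """Invert the divider equation V_out = V_IN * R_s / (R_s + R_FIXED)."""
    return R_FIXED * v_out / (V_IN - v_out)

# Hypothetical thresholds separating light / medium / hard presses;
# conductive rubber resistance drops as applied pressure increases.
PRESSURE_THRESHOLDS = [8_000.0, 4_000.0]  # ohms

def pressure_level(v_out: float) -> int:
    """Return 0 (no/light touch) through 2 (hard press) for one reading."""
    r = sensor_resistance(v_out)
    level = 0
    for threshold in PRESSURE_THRESHOLDS:
        if r < threshold:
            level += 1
    return level

if __name__ == "__main__":
    for v in (2.5, 1.2, 0.5):  # example ADC readings in volts
        print(f"V_out={v:.1f} V -> R={sensor_resistance(v):.0f} ohm, "
              f"level={pressure_level(v)}")
```

In a full prototype, one such divider channel per sensing region would feed the thresholded levels into the temporal position tracking step to distinguish interaction modes such as taps and swipes.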



Sang Ho Yoon

Sang Ho Yoon is currently working at Microsoft in Seattle, WA. He received his PhD from Purdue University and his B.S. and M.S. degrees from Carnegie Mellon University in 2008, with a major in Mechanical Engineering and a minor in Robotics. He worked in the research departments of LG Display and LG Electronics for 5 years, where he was involved in product development for consumer electronics as well as futuristic products including 'Transparent & Public Display', 'Assistive/Rehabilitation Robot', and 'Smart Car User Interface'. He is particularly interested in applying novel sensing techniques to bring new forms of input metaphor to human-computer interaction. His areas of interest include wearable/tangible interfaces, sensing techniques and fabrication, and novel input devices. Currently, his research aims at combining state-of-the-art machine learning approaches with novel sensing techniques to better support natural human-computer interaction. [Personal Website][LinkedIn]