Abstract: As pervasive computing becomes widely available in daily activities, wearable input devices that promote eyes-free interaction are needed for easy access and safety. We propose a textile wearable device that enables multimodal sensing input for eyes-free mobile interaction during daily activities. Although existing input devices possess multimodal sensing capabilities in a small form factor, they still suffer from deficiencies in compactness and softness due to the nature of the embedded materials and components. For our prototype, we paint conductive silicone rubber onto a single layer of textile and stitch conductive threads into it. From this single textile layer, multimodal sensing (strain and pressure) values are extracted via voltage dividers. Regression analysis, multi-level thresholding, and a temporal position tracking algorithm are applied to capture the different levels and modes of finger interaction that support the input taxonomy. We then demonstrate example applications with interaction designs that allow users to control existing mobile, wearable, and digital devices. The evaluation results confirm that the prototype achieves an accuracy of ≥80% across all input types and ≥88% in locating the specific interaction areas for eyes-free interaction, and that it remains robust during motions associated with daily activities. A multitasking study reveals that our prototype offers relatively fast response with low perceived workload compared to existing eyes-free input techniques.
Sang Ho Yoon, Ke Huo, Karthik Ramani
Wearable Textile Input Device with Multimodal Sensing for Eyes-Free Mobile Interaction during Daily Activities
Pervasive and Mobile Computing (2016), 33, 17-31