InstruMentAR: Auto-Generation of Augmented Reality Tutorials for Operating Digital Instruments Through Recording Embodied Demonstration

Feb 24, 2023

Authors: Ziyi Liu, Zhengzhe Zhu, Enze Jiang, Feichi Huang, Ana Villanueva, Tianyi Wang, Xun Qian, Karthik Ramani
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems

Augmented Reality tutorials, which provide necessary context by directly superimposing visual guidance on the physical referent, are an effective way to scaffold complex instrument operations. However, current AR tutorial authoring processes are not seamless, as they require users to continuously alternate between operating instruments and interacting with virtual elements. We present InstruMentAR, a system that automatically generates AR tutorials by recording user demonstrations. We design a multimodal approach that fuses gestural information with hand-worn pressure sensor data to detect and register the user's step-by-step manipulations on the control panel. With this information, the system autonomously places virtual cues of designated scale at the corresponding location for each step. Voice recognition and background capture are employed to automate the creation of text and images as AR content. For novice users receiving the authored AR tutorials, we provide immediate feedback through haptic modules. We compared InstruMentAR with traditional systems in a user study.
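The abstract's multimodal detection idea can be illustrated with a minimal sketch: a manipulation step is registered only when the tracked fingertip is near a known control-panel widget *and* the hand-worn pressure sensor reads above a contact threshold. All names, coordinates, and thresholds below are hypothetical, not taken from the paper's implementation.

```python
from dataclasses import dataclass

# Illustrative thresholds (assumptions, not values from the paper)
PRESSURE_THRESHOLD = 0.6    # normalized sensor reading treated as "contact"
PROXIMITY_THRESHOLD = 0.02  # meters; max fingertip-to-widget distance

@dataclass
class Widget:
    name: str
    position: tuple  # (x, y, z) in control-panel coordinates

def distance(a, b):
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def detect_step(fingertip_pos, pressure, widgets):
    """Return the widget being pressed, or None.

    Fuses two modalities: gesture (fingertip proximity to a widget)
    and pressure (confirmation of physical contact)."""
    if pressure < PRESSURE_THRESHOLD:
        return None  # gesture alone is not enough: require contact
    nearest = min(widgets, key=lambda w: distance(w.position, fingertip_pos))
    if distance(nearest.position, fingertip_pos) <= PROXIMITY_THRESHOLD:
        return nearest
    return None

widgets = [Widget("power", (0.0, 0.0, 0.0)), Widget("mode", (0.1, 0.0, 0.0))]
step = detect_step((0.101, 0.005, 0.0), pressure=0.8, widgets=widgets)
print(step.name if step else "no step")  # -> mode
```

Requiring both signals is what suppresses false positives: a hand hovering over a button (proximity without pressure) or resting on the panel edge (pressure without proximity to a widget) registers no step.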


Ziyi Liu


Ziyi Liu has been a Ph.D. student in the School of Mechanical Engineering at Purdue University since Fall 2021. He conducts research in Professor Karthik Ramani's Convergence Design Lab. He received his Master's and Bachelor's degrees in Mechanical Engineering from Purdue University. His current research focuses on innovative human-computer interactions using AR/VR and AI in authoring and tutoring systems.