Computer vision (CV) algorithms require large annotated datasets that are labor-intensive and expensive to create. We propose AnnotateXR, an extended reality (XR) workflow that collects varied high-fidelity data and auto-annotates it in a single demonstration. AnnotateXR allows users to align virtual models over physical objects tracked with six degrees-of-freedom (6DOF) sensors. AnnotateXR uses a hand-tracking-capable XR head-mounted display, coupled with 6DOF pose information and collision detection, to algorithmically segment different actions in video through the object's digital twin. The virtual–physical mapping provides a tight bounding volume from which semantic segmentation masks are generated for the captured image data. Beyond object and action segmentation, we also support other dimensions of annotation required by modern CV, such as human–object and object–object interactions and rich 3D recordings, all from a single demonstration. Our user study shows AnnotateXR produced over 112,000 annotated data points in 67 min.
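The mask-generation idea in the abstract can be illustrated with a minimal sketch: given a tracked object's 6DOF pose and a pinhole camera model, the object-frame points of its virtual model are projected into the image, and the projected bounding volume yields a per-frame segmentation mask. All names, the camera intrinsics, and the axis-aligned box fill below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pose_matrix(R, t):
    """Assemble a 4x4 rigid transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def project_points(points_obj, T_cam_obj, K):
    """Project object-frame 3D points to pixel coordinates via a pinhole camera K."""
    pts_h = np.hstack([points_obj, np.ones((len(points_obj), 1))])
    pts_cam = (T_cam_obj @ pts_h.T).T[:, :3]   # object frame -> camera frame
    uv = (K @ pts_cam.T).T
    return uv[:, :2] / uv[:, 2:3]              # perspective divide

def bounding_mask(points_obj, T_cam_obj, K, h, w):
    """Binary mask covering the projected bounding box of the tracked object."""
    uv = project_points(points_obj, T_cam_obj, K)
    u0, v0 = np.floor(uv.min(axis=0)).astype(int)
    u1, v1 = np.ceil(uv.max(axis=0)).astype(int)
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[max(v0, 0):min(v1, h), max(u0, 0):min(u1, w)] = 1
    return mask

# Hypothetical example: a 10 cm cube held 1 m in front of the camera.
cube = np.array([[x, y, z] for x in (-0.05, 0.05)
                           for y in (-0.05, 0.05)
                           for z in (-0.05, 0.05)])
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
T_cam_obj = pose_matrix(np.eye(3), np.array([0., 0., 1.0]))
mask = bounding_mask(cube, T_cam_obj, K, h=480, w=640)
```

A tighter mask (as the paper's "tight bounding volume" suggests) would rasterize the model's silhouette rather than an axis-aligned box; the box variant keeps the sketch dependency-free.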
AnnotateXR: An Extended Reality Workflow for Automating Data Annotation to Support Computer Vision Applications
Authors: Subramanian Chidambaram*, Rahul Jain*, Sai Swarup Reddy, Asim Unmesh, Karthik Ramani
J. Comput. Inf. Sci. Eng. Dec 2024, 24(12): 121001 (13 pages)
https://doi.org/10.1115/1.4066180
Rahul Jain
Rahul Jain has been a Ph.D. student in the School of Electrical and Computer Engineering at Purdue University since Spring 2022, conducting research in Professor Karthik Ramani's Convergence Design Lab. He received his Master's in Electrical and Computer Engineering from Purdue University and his Bachelor's in Civil Engineering from the Indian Institute of Technology (IIT) Patna. His current research focuses on computer vision, machine learning, and human-computer interaction utilizing AR/VR.