Xun Qian

Xun Qian has been a Ph.D. student in the School of Mechanical Engineering at Purdue University since Fall 2018. Before joining the C Design Lab, he received his Master's degree in Mechanical Engineering from Cornell University and his Bachelor's degree in Mechanical Engineering from the University of Science and Technology Beijing. His current research interests lie in the development of novel human-computer interactions leveraging AR/VR/MR, deep learning, and cloud computing. For more details, please visit his personal website at xun-qian.com.
MechARspace: An Authoring System Enabling Bidirectional Binding of Augmented Reality with Toys in Real-time

Zhengzhe Zhu, Ziyi Liu, Tianyi Wang, Youyou Zhang, Xun Qian, Pashin Farsak Raja, Ana Villanueva, Karthik Ramani
In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (pp. 1-16).

Augmented Reality (AR), which blends physical and virtual worlds, presents the possibility of enhancing traditional toy design. By leveraging bidirectional virtual-physical interactions between humans and the designed artifact, such AR-enhanced...

ARnnotate: An Augmented Reality Interface for Collecting Custom Dataset of 3D Hand-Object Interaction Pose Estimation

Xun Qian, Fengming He, Xiyun Hu, Tianyi Wang, Karthik Ramani
In Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology (pp. 1-14).

Vision-based 3D pose estimation has substantial potential in hand-object interaction applications and requires user-specified datasets to achieve robust performance. We propose ARnnotate, an Augmented Reality (AR) interface enabling end-users to...

ScalAR: Authoring Semantically Adaptive Augmented Reality Experiences in Virtual Reality

Xun Qian, Fengming He, Xiyun Hu, Tianyi Wang, Ananya Ipsita, Karthik Ramani
In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems

Augmented Reality (AR) experiences tightly associate virtual contents with environmental entities. However, the dissimilarity of different environments limits the adaptive AR content behaviors under large-scale deployment. We propose ScalAR, an...

The GesturAR publication receives the Best Paper Honorable Mention Award at UIST 2021!

GesturAR: An Authoring System for Creating Freehand Interactive Augmented Reality Applications

Tianyi Wang, Xun Qian, Fengming He, Xiyun Hu, Yuanzhi Cao, Karthik Ramani
In The 34th Annual ACM Symposium on User Interface Software and Technology (UIST '21)

Freehand gesture is an essential input modality for modern Augmented Reality (AR) user experiences. However, developing AR applications with customized hand interactions remains a challenge for end-users. Therefore, we propose GesturAR, an...

ProcessAR: An augmented reality-based tool to create in-situ procedural 2D/3D AR Instructions

Subramanian Chidambaram, Hank Huang, Fengming He, Xun Qian, Ana M. Villanueva, Thomas S. Redick, Wolfgang Stuerzlinger, Karthik Ramani
In Proceedings of the Designing Interactive Systems Conference

Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios due to the technical skills and expertise required to create...

LightPaintAR: Assist Light Painting Photography with Augmented Reality

Tianyi Wang, Xun Qian, Fengming He, Karthik Ramani
In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems

Light painting photos are created by moving light sources in mid-air while taking a long exposure photo. However, it is challenging for novice users to leave accurate light traces without any spatial guidance. Therefore, we present LightPaintAR, a...

AdapTutAR: An Adaptive Tutoring System for Machine Tasks in Augmented Reality

Gaoping Huang*, Xun Qian*, Tianyi Wang, Fagun Patel, Maitreya Sreeram, Yuanzhi Cao, Karthik Ramani, Alexander J. Quinn (*equal contribution)
In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems

Modern manufacturing processes are in a state of flux as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e. machine...

CAPturAR: An Augmented Reality Tool for Authoring Human-Involved Context-Aware Applications

Tianyi Wang*, Xun Qian*, Fengming He, Xiyun Hu, Ke Huo, Yuanzhi Cao, Karthik Ramani (*equal contribution)
In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20)

Recognition of human behavior plays an important role in context-aware applications. However, it is still a challenge for end-users to build personalized applications that accurately recognize their own activities. Therefore, we present CAPturAR,...

Vipo: Spatial-Visual Programming with Functions for Robot-IoT Workflows

Gaoping Huang, Pawan S. Rao, Meng-Han Wu, Xun Qian, Shimon Y. Nof, Karthik Ramani, Alexander J. Quinn
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems

Mobile robots and IoT (Internet of Things) devices can increase productivity, but only if they can be programmed by workers who understand the domain. This is especially true in manufacturing. Visual programming in the spatial context of the...