Subramanian Chidambaram

Subramanian Chidambaram is a Ph.D. student in the School of Mechanical Engineering at Purdue University. Prior to joining the C Design Lab, he obtained a Master's degree from the School of Aeronautics and Astronautics, also at Purdue, and a Bachelor's degree in Mechanical Engineering from the Vellore Institute of Technology, India. His current research interests include Human-Computer Interaction and digital interface development for Augmented Reality (AR), embedding Artificial Intelligence (AI) in AR interface development, skill transfer through AR, tangible interfaces, and geometric modeling. In the past, he has also conducted research on developing tools that guide novices to design, analyze, and fabricate functional load-bearing structures.
ProcessAR: An augmented reality-based tool to create in-situ procedural 2D/3D AR Instructions

Subramanian Chidambaram, Hank Huang, Fengming He, Xun Qian, Ana M Villanueva, Thomas S Redick, Wolfgang Stuerzlinger, Karthik Ramani
In Proceedings of the Designing Interactive Systems Conference

Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios due to the technical skills and expertise required to create interactive AR content. We developed ProcessAR, an AR-based system for developing 2D/3D content that captures subject matter experts' (SMEs) interactions with objects in their environment in situ. The design space for ProcessAR was identified from formative interviews with AR programming experts and SMEs, alongside a comparative design study with SMEs and novice users. To...

LightPaintAR: Assist Light Painting Photography with Augmented Reality

Tianyi Wang, Xun Qian, Fengming He, Karthik Ramani
In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems

Light painting photos are created by moving light sources in mid-air while taking a long exposure photo. However, it is challenging for novice users to leave accurate light traces without any spatial guidance. Therefore, we present LightPaintAR, a novel interface that leverages augmented reality (AR) traces as a spatial reference to enable precise movement of the light sources. LightPaintAR allows users to draft, edit, and adjust virtual light traces in AR, and move light sources along the AR traces to generate accurate light traces on photos. With LightPaintAR, users can light paint complex...

AdapTutAR: An Adaptive Tutoring System for Machine Tasks in Augmented Reality

Gaoping Huang*, Xun Qian*, Tianyi Wang, Fagun Patel, Maitreya Sreeram, Yuanzhi Cao, Karthik Ramani, and Alexander J. Quinn
In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems

Modern manufacturing processes are in a state of flux as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e., machine tasks. Conventional in-person training is effective but requires the time and effort of experts for each worker trained, and is not scalable. Recorded tutorials, such as video-based or augmented reality (AR) tutorials, permit more efficient scaling. However, unlike in-person tutoring, existing recorded tutorials lack the ability to adapt to workers' diverse...

CAPturAR: An Augmented Reality Tool for Authoring Human-Involved Context-Aware Applications

Tianyi Wang*, Xun Qian*, Fengming He, Xiyun Hu, Ke Huo, Yuanzhi Cao, Karthik Ramani
In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST 2020)

Recognition of human behavior plays an important role in context-aware applications. However, it is still a challenge for end-users to build personalized applications that accurately recognize their own activities. Therefore, we present CAPturAR, an in-situ programming tool that enables users to rapidly author context-aware applications by referring to their previous activities. We customize an AR head-mounted device with multiple camera systems that allow for non-intrusive capturing of users' daily activities. During authoring, we reconstruct the captured data in AR with an animated avatar...

Vipo: Spatial-Visual Programming with Functions for Robot-IoT Workflows

Gaoping Huang, Pawan S. Rao, Meng-Han Wu, Xun Qian, Shimon Y. Nof, Karthik Ramani, and Alexander J. Quinn
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems

Mobile robots and IoT (Internet of Things) devices can increase productivity, but only if they can be programmed by workers who understand the domain. This is especially true in manufacturing. Visual programming in the spatial context of the operating environment can enable mental models at a familiar level of abstraction. However, spatial-visual programming is still in its infancy; existing systems lack IoT integration and fundamental constructs, such as functions, that are essential for code reuse, encapsulation, or recursive algorithms. We present Vipo, a spatial-visual programming system...

An Exploratory Study of Augmented Reality Presence for Tutoring Machine Tasks

Yuanzhi Cao, Xun Qian, Tianyi Wang, Rachel Lee, Ke Huo, Karthik Ramani
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems

Machine tasks in workshops or factories are often a compound sequence of local, spatial, and body-coordinated human-machine interactions. Prior works have shown the merits of video-based and augmented reality (AR) tutoring systems for local tasks. However, due to the lack of a bodily representation of the tutor, they are not as effective for spatial and body-coordinated interactions. We propose avatars as an additional tutor representation to the existing AR instructions. In order to understand the design space of tutoring presence for machine tasks, we conduct a comparative study with 32...

GhostAR: A Time-space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality

Yuanzhi Cao*, Tianyi Wang*, Xun Qian, Pawan S. Rao, Manav Wadhawan, Ke Huo, Karthik Ramani
In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST 2019)

We present GhostAR, a time-space editor for authoring and acting out Human-Robot Collaborative (HRC) tasks in situ. Our system adopts an embodied authoring approach in Augmented Reality (AR) for spatially editing actions and programming robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model which takes the...
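For readers unfamiliar with the algorithm named in the abstract, dynamic time warping aligns two sequences that may vary in speed by minimizing cumulative pairwise cost. The sketch below is a minimal, generic DTW distance in Python; it is purely illustrative, as GhostAR's actual collaboration model, cost function, and input representation are not detailed in this excerpt.

```python
def dtw_distance(a, b):
    """Minimal dynamic time warping (DTW) distance between two 1-D sequences.

    Illustrative sketch only: GhostAR's collaboration model operates on
    richer motion data; the scalar sequences here are hypothetical.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # cost[i][j] = DTW distance between prefixes a[:i] and b[:j]
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local match cost
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

# Sequences played at different speeds still align with zero cost.
print(dtw_distance([0, 0, 1], [0, 1]))  # → 0.0
```

The key property, and presumably why it suits matching a live user's actions against an authored demonstration, is that the alignment tolerates differences in timing: repeating or stretching a segment of one sequence does not inflate the distance as a rigid index-by-index comparison would.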
