Tianyi Wang

Tianyi Wang is a Ph.D. student in the School of Mechanical Engineering at Purdue University. Before joining the C Design Lab, Tianyi received his bachelor's degree from the Department of Precision Instrument at Tsinghua University, Beijing, in 2016. His current research interests focus on applying robotics, augmented reality, and deep learning to Human-Computer Interaction.
LightPaintAR: Assist Light Painting Photography with Augmented Reality

Tianyi Wang, Xun Qian, Fengming He, Karthik Ramani
In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems

Light painting photos are created by moving light sources in mid-air while taking a long exposure photo. However, it is challenging for novice users to leave accurate light traces without any spatial guidance. Therefore, we present LightPaintAR, a novel interface that leverages augmented reality (AR) traces as a spatial reference to enable precise movement of the light sources. LightPaintAR allows users to draft, edit, and adjust virtual light traces in AR, and move light sources along the AR traces to generate accurate light traces on photos. With LightPaintAR, users can light paint complex...

AdapTutAR: An Adaptive Tutoring System for Machine Tasks in Augmented Reality

Gaoping Huang*, Xun Qian*, Tianyi Wang, Fagun Patel, Maitreya Sreeram, Yuanzhi Cao, Karthik Ramani, and Alexander J. Quinn
In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems

Modern manufacturing processes are in a state of flux, as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e., machine tasks. Conventional in-person training is effective but requires the time and effort of experts for each worker trained and is not scalable. Recorded tutorials, such as video-based or augmented reality (AR), permit more efficient scaling. However, unlike in-person tutoring, existing recorded tutorials lack the ability to adapt to workers’ diverse...

CAPturAR: An Augmented Reality Tool for Authoring Human-Involved Context-Aware Applications

Tianyi Wang*, Xun Qian*, Fengming He, Xiyun Hu, Ke Huo, Yuanzhi Cao, Karthik Ramani
In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST 2020)

Recognition of human behavior plays an important role in context-aware applications. However, it is still a challenge for end-users to build personalized applications that accurately recognize their own activities. Therefore, we present CAPturAR, an in-situ programming tool that helps users rapidly author context-aware applications by referring to their previous activities. We customize an AR head-mounted device with multiple cameras that allow for non-intrusive capture of the user's daily activities. During authoring, we reconstruct the captured data in AR with an animated avatar...

An Exploratory Study of Augmented Reality Presence for Tutoring Machine Tasks

Yuanzhi Cao, Xun Qian, Tianyi Wang, Rachel Lee, Ke Huo, Karthik Ramani
In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems

Machine tasks in workshops or factories are often a compound sequence of local, spatial, and body-coordinated human-machine interactions. Prior works have shown the merits of video-based and augmented reality (AR) tutoring systems for local tasks. However, due to the lack of a bodily representation of the tutor, they are not as effective for spatial and body-coordinated interactions. We propose avatars as an additional tutor representation to the existing AR instructions. In order to understand the design space of tutoring presence for machine tasks, we conduct a comparative study with 32...

Autonomous Robotic Exploration and Mapping of Smart Indoor Environments With UWB-IoT Devices

Tianyi Wang, Ke Huo, Muzhi Han, Daniel McArthur, Ze An, David Cappeleri, and Karthik Ramani
In Proceedings of AAAI Spring Symposium Series 2020

Emerging simultaneous localization and mapping (SLAM) techniques provide robots with spatial awareness of the physical world. However, such awareness remains at a geometric level. We propose an approach for quickly constructing a smart environment with semantic labels to enhance the robot's spatial intelligence. Essentially, we embed UWB-based distance-sensing IoT devices into regular items and treat the robot as a dynamic node in the IoT network. By leveraging the self-localization of the robot node, we resolve the locations of the IoT devices in the SLAM map. We then exploit the...
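The device-localization step described above amounts to trilateration: the robot's self-localized positions in the SLAM map, together with the UWB ranges measured from those positions, pin down each static device's map coordinates. A minimal, hypothetical 2-D sketch (not the paper's implementation; `locate_device` and the synthetic setup are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import least_squares

def locate_device(robot_positions, distances):
    """Least-squares trilateration of one static UWB device.

    robot_positions: (N, 2) robot poses in the SLAM map frame
    distances:       (N,)   UWB range measurements to the device
    """
    def residuals(device_xy):
        # Mismatch between predicted and measured ranges at each pose.
        return np.linalg.norm(robot_positions - device_xy, axis=1) - distances

    guess = robot_positions.mean(axis=0)  # start near the trajectory
    return least_squares(residuals, guess).x

# Synthetic check: a device at (3, 4) ranged from four robot poses.
poses = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 6.0], [7.0, 7.0]])
true_device = np.array([3.0, 4.0])
ranges = np.linalg.norm(poses - true_device, axis=1)
print(np.round(locate_device(poses, ranges), 3))  # → [3. 4.]
```

With noisy real ranges the same fit would return the maximum-likelihood position under Gaussian noise; more poses spread around the device improve conditioning.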

GhostAR: A Time-space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality

Yuanzhi Cao*, Tianyi Wang*, Xun Qian, Pawan S. Rao, Manav Wadhawan, Ke Huo, Karthik Ramani
In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST 2019).

We present GhostAR, a time-space editor for authoring and acting Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR) for spatially editing the actions and programming the robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model which takes the...
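For readers unfamiliar with dynamic time warping, the alignment it performs can be shown with a textbook sketch: it matches a live motion sequence against an authored demonstration even when the two are performed at different speeds. This is illustrative only, not the paper's collaboration model:

```python
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Cumulative DTW cost between two 1-D motion sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # skip a live frame
                                 cost[i, j - 1],      # skip a demo frame
                                 cost[i - 1, j - 1])  # match both frames
    return cost[n, m]

demo = [0.0, 1.0, 2.0, 3.0]
live = [0.0, 0.0, 1.0, 2.0, 3.0]   # same motion, performed slower
print(dtw_distance(demo, live))     # → 0.0
```

Because DTW warps the time axis, the slower `live` sequence still aligns perfectly with `demo`, which is what lets a system compare a user's ongoing action against a recorded demonstration frame by frame.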

SynchronizAR: Instant Synchronization for Spontaneous and Spatial Collaborations in Augmented Reality

Ke Huo, Tianyi Wang, Luis Paredes, Ana M Villanueva, Yuanzhi Cao, Karthik Ramani
In Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (UIST 2018), Oct. 14-17, Berlin, Germany.

We present SynchronizAR, an approach to spatially register multiple SLAM devices together without sharing maps or involving external tracking infrastructure. SynchronizAR employs a distance-based indirect registration which resolves the transformations between the separate SLAM coordinate systems. We attach an Ultra-Wideband (UWB) distance-measurement module to each of the mobile AR devices, each of which is capable of self-localization with respect to the environment. As users move on independent paths, we collect the positions of the AR devices in their local frames and the...
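The distance-based indirect registration can be illustrated with a small 2-D sketch: each sample pairs device A's position in frame A, device B's position in frame B, and the measured A-to-B range, and a nonlinear least-squares fit recovers the rigid transform between the frames from those constraints alone. This is a hypothetical reconstruction, not the paper's code; the multi-start over the rotation angle is an added assumption for robustness:

```python
import numpy as np
from scipy.optimize import least_squares

def register_frames(pos_a, pos_b, ranges):
    """Fit (theta, tx, ty) mapping frame-B points into frame A so that
    predicted device-to-device distances match the UWB range samples."""
    def residuals(params):
        theta, tx, ty = params
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        b_in_a = pos_b @ rot.T + [tx, ty]
        return np.linalg.norm(pos_a - b_in_a, axis=1) - ranges

    # Distance-only residuals are non-convex, so try several rotation seeds.
    best = None
    for theta0 in np.linspace(-np.pi, np.pi, 8, endpoint=False):
        sol = least_squares(residuals, x0=[theta0, 0.0, 0.0])
        if best is None or sol.cost < best.cost:
            best = sol
    return best.x

# Synthetic check: frame B is rotated by 0.3 rad and shifted by (1, 2).
theta_true = 0.3
rot_true = np.array([[np.cos(theta_true), -np.sin(theta_true)],
                     [np.sin(theta_true),  np.cos(theta_true)]])
pos_a = np.array([[0., 0.], [4., 0.], [4., 4.], [0., 4.], [2., 2.], [5., 1.]])
pos_b = np.array([[1., 1.], [0., 3.], [3., 2.], [2., 0.], [4., 4.], [1., 5.]])
ranges = np.linalg.norm(pos_a - (pos_b @ rot_true.T + [1.0, 2.0]), axis=1)
theta, tx, ty = register_frames(pos_a, pos_b, ranges)
```

No map data crosses between the devices here: only scalar ranges and each device's own local positions are needed, which is the core of the indirect registration idea.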

Plain2Fun Publication Receives an Honorable Mention at DIS (Designing Interactive Systems) Conference

Tianyi Wang, Ke Huo, Pratik Chawla, Guiming Chen, Siddharth Banerjee, Karthik Ramani

Plain2Fun: Augmenting Ordinary Objects with Interactive Functions by Auto-Fabricating Surface Painted Circuits received an Honorable Mention among 400+ submissions at the 2018 ACM Designing Interactive Systems Conference (DIS 2018), June 9-13, 2018, Hong Kong, China. Download: plain2fun

Plain2Fun: Augmenting Ordinary Objects with Interactive Functions by Auto-Fabricating Surface Painted Circuits

Tianyi Wang, Ke Huo, Pratik Chawla, Guiming Chen, Siddharth Banerjee, Karthik Ramani
In Proceedings of the 2018 ACM Designing Interactive Systems Conference (DIS 2018), June 9-13, 2018, Hong Kong, China. Honorable Mention Award.

The growing maker community demands better support for designing and fabricating interactive functional objects. Most current approaches focus on embedding desired functions within new objects. Instead, we advocate repurposing existing objects and rapidly authoring interactive functions onto them. We present Plain2Fun, a design and fabrication pipeline enabling users to quickly transform ordinary objects into interactive, functional ones. Plain2Fun allows users to directly design circuit layouts on the surfaces of scanned 3D models of existing objects. Our design...

Plain2Fun: Augmenting Ordinary Objects with Interactive Functions by Auto-Fabricating Surface Painted Circuits

Tianyi Wang, Ke Huo, Pratik Chawla, Guiming Chen, Siddharth Banerjee, Karthik Ramani
In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (p. LBW113). ACM.

The growing maker community demands better support for designing and fabricating interactive functional objects. Most current approaches focus on embedding desired functions within new objects. Instead, we advocate repurposing existing objects and rapidly authoring interactive functions onto them. We present Plain2Fun, a design and fabrication pipeline enabling users to quickly transform ordinary objects into interactive, functional ones. Plain2Fun allows users to directly design circuit layouts on the surfaces of scanned 3D models of existing objects. Our design...
