Computer vision (CV) algorithms require large annotated datasets that are often labor-intensive and expensive to create. We propose AnnotateXR, an extended reality (XR) workflow to collect various high-fidelity data and auto-annotate it in a single...
Estimating Ego-Body Pose from Doubly Sparse Egocentric Video Data (Advances in Neural Information Processing Systems)
We study the problem of estimating the body movements of a camera wearer from egocentric videos. Current methods for ego-body pose estimation rely on temporally dense sensor data, such as IMU measurements from spatially sparse body parts like the...
M2D2M: Multi-Motion Generation from Text with Discrete Diffusion Models
We introduce the Multi-Motion Discrete Diffusion Models (M2D2M), a novel approach for human motion generation from textual descriptions of multiple actions, utilizing the strengths of discrete diffusion models. This approach adeptly addresses the...
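For background, here is a minimal sketch of the categorical forward process used by discrete diffusion models in general (a D3PM-style formulation; the transition matrices Q_t and the motion-token vocabulary are illustrative assumptions, not details taken from this paper):

```latex
% Generic discrete-diffusion forward process over motion tokens (sketch).
% x_t is a one-hot token and Q_t a row-stochastic transition matrix; both are
% illustrative assumptions, not this paper's exact formulation.
q(x_t \mid x_{t-1}) = \mathrm{Cat}\!\left(x_t;\ p = x_{t-1} Q_t\right), \qquad
q(x_t \mid x_0) = \mathrm{Cat}\!\left(x_t;\ p = x_0 \bar{Q}_t\right), \quad
\bar{Q}_t = Q_1 Q_2 \cdots Q_t .
```

A reverse model is then trained to denoise such token sequences conditioned on the text description.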
InfoGCN++: Learning Representation by Predicting the Future for Online Human Skeleton-based Action Recognition
Skeleton-based action recognition has made significant advancements recently, with models like InfoGCN showcasing remarkable accuracy. However, these models exhibit a key limitation: they necessitate complete action observation prior to...
avaTTAR: Table Tennis Stroke Training with On-body and Detached Visualization in Augmented Reality
Table tennis stroke training is a critical aspect of player development. We designed a new augmented reality (AR) system, avaTTAR, for table tennis stroke training. The system provides both "on-body" (first-person view) and "detached" (third-person...
ClassMeta: Designing Interactive Virtual Classmate to Promote VR Classroom Participation
Peer influence plays a crucial role in promoting classroom participation, where behaviors from active students can contribute to a collective classroom learning experience. However, the presence of these active students depends on several...
AircraftVerse: A Large-Scale Multimodal Dataset of Aerial Vehicle Designs
We present AircraftVerse, a publicly available aerial vehicle design dataset. Aircraft design encompasses different physics domains and, hence, multiple modalities of representation. The evaluation of these cyber-physical system (CPS) designs...
Interacting Objects: A dataset of object-object interactions for richer dynamic scene representations
Asim Unmesh, Rahul Jain, Jingyu Shi, V. K. Chaithanya Manam, Hyung-Gun Chi, Subramanian Chidambaram, Alexander J. Quinn, Karthik Ramani. IEEE...
An HCI-Centric Survey and Taxonomy of Human-Generative-AI Interactions
Deep Ritz Method with Adaptive Quadrature for Linear Elasticity
In this paper, we study the deep Ritz method for solving the linear elasticity equation from a numerical analysis perspective. A modified Ritz formulation using the H^{1/2}(Γ_D) norm is introduced and analyzed for the linear elasticity equation in order to...
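As a hedged sketch of the general approach, the deep Ritz method minimizes an energy functional over a neural-network ansatz u_θ, with the Dirichlet data enforced weakly on Γ_D; the penalty weight β and the exact boundary term below reflect the generic formulation, not the paper's precise definition:

```latex
% Generic penalized Ritz energy for linear elasticity (sketch); the weight
% beta and the H^{1/2}(Gamma_D) boundary term are illustrative assumptions.
E(u_\theta) = \int_\Omega \Big( \tfrac{1}{2}\, \sigma(u_\theta) : \varepsilon(u_\theta) - f \cdot u_\theta \Big)\, dx
\;-\; \int_{\Gamma_N} g \cdot u_\theta \, ds
\;+\; \beta \,\| u_\theta - u_D \|_{H^{1/2}(\Gamma_D)}^2 ,
```

with the integrals approximated by (adaptive) quadrature and the network parameters θ fitted by gradient-based optimization.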
AdamsFormer for Spatial Action Localization in the Future
Predicting future action locations is vital for applications like human-robot collaboration. While some computer vision tasks have made progress in predicting human actions, accurately localizing these actions in future frames remains an area with...
ImpersonatAR: Using Embodied Authoring and Evaluation to Prototype Multi-Scenario Use Cases for Augmented Reality Applications
The Design of a Virtual Prototyping System for Authoring Interactive Virtual Reality Environments From Real-World Scans
Ubi-TOUCH: Ubiquitous Tangible Object Utilization through Consistent Hand-object interaction in Augmented Reality
Utilizing everyday objects as tangible proxies for Augmented Reality (AR) provides users with haptic feedback while interacting with virtual objects. Yet, existing methods focus on the attributes of the objects, constraining the possible proxies...
Simplification of 3D CAD Model in Voxel Form for Mechanical Parts Using Generative Adversarial Networks
Ubi Edge: Authoring Edge-Based Opportunistic Tangible User Interfaces in Augmented Reality
Edges are one of the most ubiquitous geometric features of physical objects. They provide accurate haptic feedback and easy-to-track features for camera systems, making them an ideal basis for Tangible User Interfaces (TUI) in Augmented Reality...
Pose Relation Transformer: Refine Occlusions for Human Pose Estimation
Accurately estimating the human pose is an essential task for many applications in robotics. However, existing pose estimation methods suffer from poor performance when occlusion occurs. Recent advances in NLP have been very successful in...
InstruMentAR: Auto-Generation of Augmented Reality Tutorials for Operating Digital Instruments Through Recording Embodied Demonstration
Augmented Reality tutorials, which provide necessary context by directly superimposing visual guidance on the physical referent, represent an effective way of scaffolding complex instrument operations. However, current AR tutorial authoring...
LearnIoTVR: An End-to-End Virtual Reality Environment Providing Authentic Learning Experiences for Internet of Things
The rapid growth of Internet-of-Things (IoT) applications has generated interest from many industries and a need for graduates with relevant knowledge. An IoT system comprises spatially distributed interactions between humans and various...
Advanced modeling method for quantifying cumulative subjective fatigue in mid-air interaction
Interaction in mid-air can be fatiguing. A model-based method to quantify cumulative subjective fatigue for such interaction was recently introduced in HCI research. This model separates muscle units into three states: active (MA), fatigued (MF), or...
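As a rough illustration of how such a three-state model evolves over time, below is a minimal Python sketch assuming the standard three-compartment fatigue dynamics; the drive logic, fatigue rate F, recovery rate R, and target load are illustrative assumptions, not the calibrated parameters or extensions introduced in this paper.

```python
# Minimal sketch of a generic three-compartment muscle-fatigue model with
# active (MA), fatigued (MF), and rested (MR) compartments. All rates and the
# drive logic are illustrative assumptions, not this paper's parameters.

def simulate(target_load=0.2, F=0.01, R=0.002, dt=0.1, seconds=60.0):
    ma, mf, mr = 0.0, 0.0, 1.0          # start fully rested (fractions sum to 1)
    for _ in range(int(seconds / dt)):
        # Recruit rested units toward the target load, limited by availability.
        c = min(max(target_load - ma, 0.0), mr / dt)
        d_ma = c - F * ma               # recruitment inflow, fatigue outflow
        d_mf = F * ma - R * mf          # fatigue inflow, recovery outflow
        d_mr = R * mf - c               # recovery inflow, recruitment outflow
        ma, mf, mr = ma + dt * d_ma, mf + dt * d_mf, mr + dt * d_mr
    return ma, mf, mr

ma, mf, mr = simulate()
print(f"after 60 s at 20% load: active={ma:.3f}, fatigued={mf:.3f}, rested={mr:.3f}")
```

A subjective fatigue score can then be derived from the accumulated fatigued fraction MF (one common choice; the paper's exact mapping may differ).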
MechARspace: An Authoring System Enabling Bidirectional Binding of Augmented Reality with Toys in Real-time
Augmented Reality (AR), which blends physical and virtual worlds, presents the possibility of enhancing traditional toy design. By leveraging bidirectional virtual-physical interactions between humans and the designed artifact, such AR-enhanced...
ARnnotate: An Augmented Reality Interface for Collecting Custom Dataset of 3D Hand-Object Interaction Pose Estimation
Vision-based 3D pose estimation has substantial potential in hand-object interaction applications and requires user-specified datasets to achieve robust performance. We propose ARnnotate, an Augmented Reality (AR) interface enabling end-users to...
EditAR: A Digital Twin Authoring Environment for Creation of AR/VR and Video Instructions from a Single Demonstration
Augmented/Virtual reality and video-based media play a vital role in the digital learning revolution to train novices in spatial tasks. However, creating content for these different media requires expertise in several fields. We present EditAR, a...
StretchAR: Exploiting Touch and Stretch as a Method of Interaction for Smart Glasses Using Wearable Straps
Over the past decade, augmented reality (AR) developers have explored a variety of approaches to allow users to interact with the information displayed on smart glasses and head-mounted displays (HMDs). Current interaction modalities such as...
ScalAR: Authoring Semantically Adaptive Augmented Reality Experiences in Virtual Reality
Augmented Reality (AR) experiences tightly associate virtual contents with environmental entities. However, the dissimilarity of different environments limits the adaptive AR content behaviors under large-scale deployment. We propose ScalAR, an...
Towards Modeling of Virtual Reality Welding Simulators to Promote Accessible and Scalable Training
The US manufacturing industry is currently facing a welding workforce shortage, largely due to the inadequacy of widespread welding training. To address this challenge, we present a Virtual Reality (VR)-based training system aimed at...
InfoGCN: Representation Learning for Human Skeleton-based Action Recognition
Human skeleton-based action recognition offers a valuable means to understand the intricacies of human behavior because it can handle the complex relationships between physical constraints and intention. Although several studies have focused on...
ColabAR: A Toolkit for Remote Collaboration in Tangible Augmented Reality Laboratories
Current circumstances are accelerating the adoption of new technologies that provide high-quality education through remote collaboration as well as hands-on learning. This is particularly important in the case of laboratory-based classes, which play an essential role in STEM...
CHIMERA: Supporting Wearables Development across Multidisciplinary Perspectives
Wearable technologies draw on a range of disciplines, including fashion, textiles, HCI, and engineering. Due to differences in methodology, wearables researchers can experience gaps or breakdowns in values, goals, and vocabulary when collaborating....
GesturAR: An Authoring System for Creating Freehand Interactive Augmented Reality Applications
Freehand gesture is an essential input modality for modern Augmented Reality (AR) user experiences. However, developing AR applications with customized hand interactions remains a challenge for end-users. Therefore, we propose GesturAR, an...
Towards a Comprehensive and Robust Micromanipulation System with Force-Sensing and VR Capabilities
ProcessAR: An augmented reality-based tool to create in-situ procedural 2D/3D AR Instructions
Augmented reality (AR) is an efficient form of delivering spatial information and has great potential for training workers. However, AR is still not widely used for such scenarios due to the technical skills and expertise required to create...
FabHandWear: An End-to-End Pipeline from Design to Fabrication of Customized Functional Hand Wearables
Current hand wearables offer limited customizability; they fit loosely on an individual's hand and lack comfort. The main barrier to customizing hand wearables is the geometric complexity and size variation of hands. Moreover, there are different...
Towards modeling of human skilling for electrical circuitry using augmented reality applications
Augmented reality (AR) is a unique, hands-on tool to deliver information. However, its educational value has been mainly demonstrated empirically so far. In this paper, we present a modeling approach to provide users with mastery of a skill, using...
RobotAR: An Augmented Reality Compatible Teleconsulting Robotics Toolkit for Augmented Makerspace Experiences
Distance learning is facing a critical moment in finding a balance between high-quality education for remote students and engaging them in hands-on learning. This is particularly relevant for project-based classrooms and makerspaces, which typically...
LightPaintAR: Assist Light Painting Photography with Augmented Reality
Light painting photos are created by moving light sources in mid-air while taking a long exposure photo. However, it is challenging for novice users to leave accurate light traces without any spatial guidance. Therefore, we present LightPaintAR, a...
AdapTutAR: An Adaptive Tutoring System for Machine Tasks in Augmented Reality
Modern manufacturing processes are in a state of flux, as they adapt to increasing demand for flexible and self-configuring production. This poses challenges for training workers to rapidly master new machine operations and processes, i.e. machine...
VRFromX: From Scanned Reality to Interactive Virtual Experience with Human-in-the-Loop
There is an increasing trend of Virtual-Reality (VR) applications found in education, entertainment, and industry. Many of them utilize real world tools, environments, and interactions as bases for creation. However, creating such applications is...
Object Synthesis by Learning Part Geometry with Surface and Volumetric Representations
First-Person View Hand Segmentation of Multi-Modal Hand Activity Video Dataset
First-person-view videos of hands interacting with tools are widely used in the computer vision industry. However, creating a dataset with pixel-wise segmentation of hands is challenging since most videos are captured with fingertips...
A Large-scale Annotated Mechanical Components Benchmark for Classification and Retrieval Tasks with Deep Neural Networks
We introduce the Mechanical Components Benchmark (MCB), a large-scale annotated dataset of 3D objects of mechanical components for classification and retrieval tasks. The dataset enables data-driven...
CAPturAR: An Augmented Reality Tool for Authoring Human-Involved Context-Aware Applications
Recognition of human behavior plays an important role in context-aware applications. However, it is still a challenge for end-users to build personalized applications that accurately recognize their own activities. Therefore, we present CAPturAR,...
StoryMakAR: Bringing Stories to Life with an Augmented Reality & Physical Prototyping Toolkit for Youth
Makerspaces can support educational experiences in prototyping for children. Storytelling platforms enable high levels of creativity and expression, but have high barriers to entry. We introduce StoryMakAR, which combines making and storytelling....
Vipo: Spatial-Visual Programming with Functions for Robot-IoT Workflows
Mobile robots and IoT (Internet of Things) devices can increase productivity, but only if they can be programmed by workers who understand the domain. This is especially true in manufacturing. Visual programming in the spatial context of the...
An Exploratory Study of Augmented Reality Presence for Tutoring Machine Tasks
Machine tasks in workshops or factories are often a compound sequence of local, spatial, and body-coordinated human-machine interactions. Prior works have shown the merits of video-based and augmented reality (AR) tutoring systems for local tasks....
Meta-AR-App: An Authoring Platform for Collaborative Augmented Reality in STEM Classrooms
Augmented Reality (AR) has become a valuable tool for education and training processes. Meanwhile, cloud-based technologies can foster collaboration and other interaction modalities to enhance learning. We combine the cloud capabilities with AR...
Autonomous Robotic Exploration and Mapping of Smart Indoor Environments With UWB-IoT Devices
Emerging simultaneous localization and mapping (SLAM) techniques give robots spatial awareness of the physical world. However, such awareness remains at a geometric level. We propose an approach for quickly constructing a smart...
Using Social Interaction Traced Data and Context to Predict Collaboration Quality and Creative Fluency in Collaborative Design Learning Environments
Engineering design typically occurs as a collaborative process situated in specific contexts such as computer-supported environments; however, there is limited research examining the dynamics of design collaboration in specific contexts. In this...