GesPrompt: Leveraging Co-Speech Gestures to Augment LLM-Based Interaction in Virtual Reality

Large Language Model (LLM)-based copilots have shown great potential in Extended Reality (XR) applications. However, users face challenges when describing 3D environments to copilots due to the complexity of conveying spatial-temporal information through...
CARING-AI: Towards Authoring Context-aware Augmented Reality INstruction through Generative Artificial Intelligence

Context-aware AR instruction enables adaptive and in-situ learning experiences. However, hardware limitations and expertise requirements constrain the creation of such instructions. With recent developments in Generative Artificial Intelligence (Gen-AI), current...
Visualizing Causality in Mixed Reality for Manual Task Learning: A Study

Mixed Reality (MR) is gaining prominence in manual task skill learning due to its in-situ, embodied, and immersive experience. To teach manual tasks, current methodologies break the task into hierarchies (tasks into subtasks) and visualize not only the current...