My research explores how multimodal interaction can support spatial thinking in immersive environments. I design and evaluate interfaces that use hand, voice, and gaze input to help users plan, visualize, and reason about 3D tasks in Virtual Reality.

Spatial Hand Actions (CHI 2025)

This project investigates the hand actions users naturally perform to express spatial thinking during 3D assembly tasks in VR.

  • Problem: Existing VR interfaces often capture input without revealing how users reason spatially.
  • System: An immersive VR environment for studying natural hand actions during 3D assembly tasks.
  • Contribution: Identified action patterns that can inform the design of multimodal interfaces.

Paper · Project

Spatial Guides

This project explores how lightweight visual guides can support orientation, visualization, and relational reasoning during immersive 3D assembly.

  • Problem: Users often struggle with planning and alignment during spatial assembly in VR.
  • System: Minimal visual guides triggered through multimodal interaction.
  • Contribution: A guidance approach aimed at supporting planning and reasoning, not just faster task completion.

Project


Gaze-Supported Planning

This ongoing work investigates gaze as a mechanism for planning and wayfinding in immersive spatial tasks.

  • Problem: Immersive assembly interfaces offer little support for planning before action begins.
  • System: Gaze-based cues for pre-action planning and spatial decision-making.
  • Contribution: Extends multimodal interaction beyond hand and voice toward planning-aware interfaces.

Project


Additional Research Experience

I have also contributed to research on eye-gaze-based VR authentication and immersive interaction systems, including work published at IEEE VRW and ACM SUI.