MazeMind: Exploring the Effects of Hand Gestures and Eye Gazing on Cognitive Load and Task Efficiency in an Augmented Reality Environment

Jiacheng Sun, Ting Liao

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

This paper investigates the impact of hand gestures and eye gazing on cognitive load and task efficiency in Augmented Reality (AR) using Microsoft's HoloLens 2 and the custom-developed application MazeMind. Through human-subject experiments in MazeMind, we assessed cognitive load using NASA-TLX and Galvanic Skin Response (GSR), confirming a significant correlation between real-time GSR readings and post-interaction NASA-TLX scores. This underscores GSR's potential for real-time cognitive load assessment in AR. Contrary to expectations, our findings indicated no significant difference between hand-gesture-based and eye-gaze-based interactions in terms of cognitive load or completion efficiency in AR activities, even for cognitively challenging tasks. This suggests the potential interchangeability of these interaction modalities in AR and provides evidence-based guidelines for designers. Our study contributes to a deeper understanding of multimodal interactions in AR and lays the foundation for future exploration of more complex AR interaction design strategies.

Original language: English
Title of host publication: Design Computing and Cognition'24
Subtitle of host publication: Volume 2
Pages: 105-120
Number of pages: 16
Volume: 2
ISBN (Electronic): 9783031719226
DOIs
State: Published - 1 Jan 2024

