TY - CHAP
T1 - MazeMind
T2 - Exploring the Effects of Hand Gestures and Eye Gazing on Cognitive Load and Task Efficiency in an Augmented Reality Environment
AU - Sun, Jiacheng
AU - Liao, Ting
N1 - Publisher Copyright:
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2025, corrected publication 2025.
PY - 2024/1/1
Y1 - 2024/1/1
AB - This paper investigates the impact of hand gestures and eye gazing on cognitive load and task efficiency in Augmented Reality (AR) using Microsoft’s HoloLens 2 and the custom-developed application MazeMind. By conducting human-subject experiments in MazeMind, we assessed cognitive load using NASA-TLX and Galvanic Skin Response (GSR), confirming a significant correlation between real-time GSR readings and post-interaction NASA-TLX scores. This underscores GSR's potential for real-time cognitive load assessment in AR. Contrary to expectations, our findings indicated no significant difference between hand-gesture-based and eye-gaze-based interactions in terms of cognitive load and completion efficiency in AR activities, even for cognitively challenging tasks. This suggests the potential interchangeability of these interaction modalities in AR and provides evidence-based guidelines for designers. Our study contributes to a deeper understanding of multimodal interactions in AR and lays the foundation for future exploration of more complex AR interaction design strategies.
UR - http://www.scopus.com/inward/record.url?scp=105004161494&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105004161494&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-71922-6_7
DO - 10.1007/978-3-031-71922-6_7
M3 - Chapter
AN - SCOPUS:105004161494
SN - 9783031719219
VL - 2
SP - 105
EP - 120
BT - Design Computing and Cognition’24
ER -