TY - JOUR
T1 - VICs: A modular HCI framework using spatiotemporal dynamics
AU - Ye, Guangqi
AU - Corso, Jason J.
AU - Burschka, Darius
AU - Hager, Gregory D.
PY - 2004/12
N2 - Many vision-based human-computer interaction systems are based on the tracking of user actions. Examples include gaze tracking, head tracking, finger tracking, etc. In this paper, we present a framework that employs no user tracking; instead, all interface components continuously observe and react to changes within a local neighborhood. More specifically, components expect a predefined sequence of visual events called visual interface cues (VICs). VICs include color, texture, motion, and geometric elements, arranged to maximize the veridicality of the resulting interface element. A component is executed when this stream of cues has been satisfied. We present a general architecture for an interface system operating under the VIC-based HCI paradigm and then focus specifically on an appearance-based system in which a hidden Markov model (HMM) is employed to learn the gesture dynamics. Our implementation of the system successfully recognizes a button push with a 96% success rate.
KW - Gesture recognition
KW - Human-computer interaction
KW - Vision-based interaction
UR - http://www.scopus.com/inward/record.url?scp=10844286441&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=10844286441&partnerID=8YFLogxK
DO - 10.1007/s00138-004-0159-0
M3 - Article
AN - SCOPUS:10844286441
SN - 0932-8092
VL - 16
SP - 13
EP - 20
JO - Machine Vision and Applications
IS - 1
ER -