TY - GEN
T1 - Development of vision-aided navigation for a wearable outdoor augmented reality system
AU - Menozzi, Alberico
AU - Clipp, Brian
AU - Wenger, Eric
AU - Heinly, Jared
AU - Dunn, Enrique
AU - Towles, Herman
AU - Frahm, Jan-Michael
AU - Welch, Gregory
PY - 2014
Y1 - 2014
N2 - This paper describes the development of vision-aided navigation (i.e., pose estimation) for a wearable augmented reality system operating in natural outdoor environments. This system combines a novel pose estimation capability, a helmet-mounted see-through display, and a wearable processing unit to accurately overlay geo-registered graphics on the user's view of reality. Accurate pose estimation is achieved through integration of inertial, magnetic, GPS, terrain-elevation, and computer-vision inputs. Specifically, a helmet-mounted forward-looking camera and custom computer vision algorithms are used to provide measurements of absolute orientation (i.e., orientation of the helmet with respect to the Earth). These orientation measurements, which leverage mountainous-terrain horizon geometry and/or known landmarks, enable the system to achieve significant improvements in accuracy compared to GPS/INS solutions of similar size, weight, and power, and to operate robustly in the presence of magnetic disturbances. Recent field-testing activities, across a variety of environments where these vision-based signals of opportunity are available, indicate that high accuracy (less than 10 mrad) in graphics geo-registration can be achieved. This paper presents the pose estimation process, the methods behind the generation of vision-based measurements, and representative experimental results.
KW - augmented reality
KW - computer vision
KW - inertial navigation
UR - http://www.scopus.com/inward/record.url?scp=84905014032&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84905014032&partnerID=8YFLogxK
U2 - 10.1109/PLANS.2014.6851442
DO - 10.1109/PLANS.2014.6851442
M3 - Conference contribution
AN - SCOPUS:84905014032
SN - 9781479933204
T3 - Record - IEEE PLANS, Position Location and Navigation Symposium
SP - 760
EP - 772
BT - Proceedings of the 2014 IEEE/ION Position, Location and Navigation Symposium, PLANS 2014
T2 - 2014 IEEE/ION Position, Location and Navigation Symposium, PLANS 2014
Y2 - 5 May 2014 through 8 May 2014
ER -