TY - GEN
T1 - Learning to navigate robotic wheelchairs from demonstration
T2 - 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019
AU - Kutbi, Mohammed
AU - Chang, Yizhe
AU - Sun, Bo
AU - Mordohai, Philippos
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/10
Y1 - 2019/10
AB - Learning from demonstration (LfD) enables robots to learn complex relationships between their state, perception and actions that are hard to express in an optimization framework. While people intuitively know what they would like to do in a given situation, they often have difficulty representing their decision process precisely enough to enable an implementation. Here, we are interested in robots that carry passengers, such as robotic wheelchairs, where user preferences, comfort and the feeling of safety are important for autonomous navigation. Balancing these requirements is not straightforward. While robots can be trained in an LfD framework in which users drive the robot according to their preferences, performing these demonstrations can be time-consuming, expensive, and possibly dangerous. Inspired by recent efforts for generating synthetic data for training autonomous driving systems, we investigate whether it is possible to train a robot based on simulations to reduce the time requirements, cost and potential risk. A key characteristic of our approach is that the input is not images, but the locations of people and obstacles relative to the robot. We argue that this allows us to transfer the classifier from the simulator to the physical world and to previously unseen environments that do not match the appearance of the training set. Experiments with 14 subjects providing physical and simulated demonstrations validate our claim.
KW - Assistive robotics
KW - Learning from demonstration
KW - Learning in simulation
KW - Robotic wheelchair
UR - http://www.scopus.com/inward/record.url?scp=85082487125&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082487125&partnerID=8YFLogxK
U2 - 10.1109/ICCVW.2019.00309
DO - 10.1109/ICCVW.2019.00309
M3 - Conference contribution
AN - SCOPUS:85082487125
T3 - Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
SP - 2522
EP - 2531
BT - Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
Y2 - 27 October 2019 through 28 October 2019
ER -