TY - GEN
T1 - Teaching a robot tasks of arbitrary complexity via human feedback
AU - Wang, Guan
AU - Trimbach, Carl
AU - Lee, Jun Ki
AU - Ho, Mark K.
AU - Littman, Michael L.
N1 - Publisher Copyright:
© 2020 Association for Computing Machinery.
PY - 2020/3/9
Y1 - 2020/3/9
N2 - This paper addresses the problem of training a robot to carry out temporal tasks of arbitrary complexity via evaluative human feedback that can be inaccurate. A key idea explored in our work is a kind of curriculum learning: training the robot to master simple tasks and then building up to more complex tasks. We show how a training procedure, using knowledge of the formal task representation, can decompose and train any task efficiently in the size of its representation. We further provide a set of experiments that support the claim that non-expert human trainers can decompose tasks in a way that is consistent with our theoretical results, with more than half of participants successfully training all of our experimental missions. We compared our algorithm with existing approaches, and our experimental results suggest that our method outperforms alternatives, especially when feedback contains mistakes.
AB - This paper addresses the problem of training a robot to carry out temporal tasks of arbitrary complexity via evaluative human feedback that can be inaccurate. A key idea explored in our work is a kind of curriculum learning: training the robot to master simple tasks and then building up to more complex tasks. We show how a training procedure, using knowledge of the formal task representation, can decompose and train any task efficiently in the size of its representation. We further provide a set of experiments that support the claim that non-expert human trainers can decompose tasks in a way that is consistent with our theoretical results, with more than half of participants successfully training all of our experimental missions. We compared our algorithm with existing approaches, and our experimental results suggest that our method outperforms alternatives, especially when feedback contains mistakes.
KW - Human-robot interaction
KW - Learning from human feedback
KW - Linear temporal logic
KW - Reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85082021557&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85082021557&partnerID=8YFLogxK
U2 - 10.1145/3319502.3374824
DO - 10.1145/3319502.3374824
M3 - Conference contribution
AN - SCOPUS:85082021557
T3 - ACM/IEEE International Conference on Human-Robot Interaction
SP - 649
EP - 657
BT - HRI 2020 - Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
T2 - 15th Annual ACM/IEEE International Conference on Human Robot Interaction, HRI 2020
Y2 - 23 March 2020 through 26 March 2020
ER -