TY - GEN
T1 - Robot-Supervised Learning for Object Segmentation
AU - Florence, Victoria
AU - Corso, Jason J.
AU - Griffin, Brent
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/5
Y1 - 2020/5
N2 - To be effective in unstructured and changing environments, robots must learn to recognize new objects. Deep learning has enabled rapid progress for object detection and segmentation in computer vision; however, this progress comes at the price of human annotators labeling many training examples. This paper addresses the problem of extending learning-based segmentation methods to robotics applications where annotated training data is not available. Our method enables pixelwise segmentation of grasped objects. We factor the problem of segmenting the object from the background into two sub-problems: (1) segmenting the robot manipulator and object from the background and (2) segmenting the object from the manipulator. We propose a kinematics-based foreground segmentation technique to solve (1). To solve (2), we train a self-recognition network that segments the robot manipulator. We train this network without human supervision, leveraging our foreground segmentation technique from (1) to label a training set of images containing the robot manipulator without a grasped object. We demonstrate experimentally that our method outperforms the state of the art in adaptable in-hand object segmentation. We also show that a training set composed of automatically labeled images of grasped objects improves segmentation performance on a test set of images of the same objects in the environment.
UR - http://www.scopus.com/inward/record.url?scp=85092715857&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85092715857&partnerID=8YFLogxK
U2 - 10.1109/ICRA40945.2020.9196543
DO - 10.1109/ICRA40945.2020.9196543
M3 - Conference contribution
AN - SCOPUS:85092715857
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 1343
EP - 1349
BT - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
T2 - 2020 IEEE International Conference on Robotics and Automation, ICRA 2020
Y2 - 31 May 2020 through 31 August 2020
ER -