TY - GEN
T1 - 3-D Reconstruction Using Monocular Camera and Lights
T2 - 2023 IEEE International Conference on Robotics and Automation, ICRA 2023
AU - Roznere, Monika
AU - Mordohai, Philippos
AU - Rekleitis, Ioannis
AU - Quattrini Li, Alberto
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - This paper proposes a novel underwater Multi-View Photometric Stereo (MVPS) framework for reconstructing scenes in 3-D with a non-stationary, low-cost robot equipped with a monocular camera and fixed lights. The underwater realm is the primary focus of study here, due to the challenges of utilizing underwater camera imagery and the lack of low-cost, reliable localization systems. Previous underwater PS approaches provided accurate scene reconstruction results, but assumed that the robot was stationary at the bottom. This assumption is limiting, as many artifacts, reefs, and man-made structures are large and rise meters above the bottom. Our proposed MVPS framework relaxes the stationarity assumption by utilizing a monocular SLAM system to estimate small robot motions and extract an initial sparse feature map. To compensate for the scale inconsistency in the monocular SLAM output, our MVPS optimization scheme jointly estimates a high-quality, dense 3-D reconstruction and corrects the camera pose estimates. We also present an attenuation and camera-light extrinsic parameter calibration method for non-stationary robots. Finally, validation experiments with a BlueROV2 demonstrated that a low-cost platform can produce high-quality scene reconstructions. Overall, this work is the foundation of an active perception pipeline for robots (i.e., underwater, ground, and aerial) to explore and map complex structures with high accuracy and resolution using an inexpensive sensor-light configuration.
AB - This paper proposes a novel underwater Multi-View Photometric Stereo (MVPS) framework for reconstructing scenes in 3-D with a non-stationary, low-cost robot equipped with a monocular camera and fixed lights. The underwater realm is the primary focus of study here, due to the challenges of utilizing underwater camera imagery and the lack of low-cost, reliable localization systems. Previous underwater PS approaches provided accurate scene reconstruction results, but assumed that the robot was stationary at the bottom. This assumption is limiting, as many artifacts, reefs, and man-made structures are large and rise meters above the bottom. Our proposed MVPS framework relaxes the stationarity assumption by utilizing a monocular SLAM system to estimate small robot motions and extract an initial sparse feature map. To compensate for the scale inconsistency in the monocular SLAM output, our MVPS optimization scheme jointly estimates a high-quality, dense 3-D reconstruction and corrects the camera pose estimates. We also present an attenuation and camera-light extrinsic parameter calibration method for non-stationary robots. Finally, validation experiments with a BlueROV2 demonstrated that a low-cost platform can produce high-quality scene reconstructions. Overall, this work is the foundation of an active perception pipeline for robots (i.e., underwater, ground, and aerial) to explore and map complex structures with high accuracy and resolution using an inexpensive sensor-light configuration.
UR - http://www.scopus.com/inward/record.url?scp=85168710703&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85168710703&partnerID=8YFLogxK
U2 - 10.1109/ICRA48891.2023.10160459
DO - 10.1109/ICRA48891.2023.10160459
M3 - Conference contribution
AN - SCOPUS:85168710703
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 1026
EP - 1032
BT - Proceedings - ICRA 2023
Y2 - 29 May 2023 through 2 June 2023
ER -