Three-dimensional (3D) reconstruction from unmanned aerial vehicle (UAV) images has garnered considerable attention with the rapid growth of UAV technology. The crucial requirement for large-scale 3D reconstruction from UAV imagery is accurate, drift-free state estimation of the UAV. Vision-based simultaneous localization and mapping (SLAM) systems are widely used in this field because they jointly localize the robot and map an unknown environment. However, current visual SLAM systems are vulnerable to issues including feature extraction failure, mismatching, accumulated drift, and tracking loss in complicated, low-texture environments. To cope with these challenges, we build a stereo visual SLAM framework based on ORB-SLAM that fuses global positioning system (GPS) information with deep-learning-based dense matching results to achieve accurate, drift-free pose estimation and 3D reconstruction from UAV images. We use a state-of-the-art deep learning method to create pixel-wise dense matches between adjacent frames and between stereo frames, and we exploit these matches fully. Then, to minimize accumulated drift, global GPS data are incorporated into pose graph optimization, which aligns the local estimate with the global coordinate frame. Finally, the globally accurate, drift-free pose estimates are used to build a sparse point cloud map, and a patch-based multi-view stereo algorithm constructs a dense map from the UAV images. We evaluate the effectiveness and accuracy of our approach in both simulated and real-world experiments, and extensive results demonstrate that our method achieves accurate, drift-free pose estimation and reconstruction.
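The GPS-fused pose graph optimization described above can be illustrated with a minimal sketch. The toy model below is not the paper's implementation: it reduces poses to 1-D positions and solves a weighted linear least-squares problem with NumPy, where relative odometry constraints (analogous to the SLAM-estimated relative motion between frames) are combined with sparse absolute GPS fixes that anchor the trajectory in the global frame and suppress accumulated drift. The function name, the unit odometry weight, and the `gps_weight` parameter are illustrative assumptions.

```python
import numpy as np

def fuse_gps_pose_graph(odom, gps, gps_weight=10.0):
    """Toy 1-D pose graph: fuse relative odometry with sparse absolute
    GPS fixes via weighted linear least squares (illustrative only).

    odom: list of measured displacements between consecutive poses
    gps:  dict {pose_index: absolute GPS position}
    Returns the optimized absolute positions (length len(odom) + 1).
    """
    n = len(odom) + 1
    rows, rhs, weights = [], [], []
    # Relative (odometry) constraints: x[i+1] - x[i] = odom[i]
    for i, d in enumerate(odom):
        r = np.zeros(n)
        r[i + 1], r[i] = 1.0, -1.0
        rows.append(r); rhs.append(d); weights.append(1.0)
    # Absolute (GPS) constraints: x[k] = gps[k], weighted more strongly
    for k, z in gps.items():
        r = np.zeros(n)
        r[k] = 1.0
        rows.append(r); rhs.append(z); weights.append(gps_weight)
    w = np.array(weights)[:, None]
    A = np.array(rows) * w
    b = np.array(rhs) * w.ravel()
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Odometry overestimates each unit step as 1.1 (simulated drift);
# GPS fixes at the first and last poses pull the trajectory back.
x = fuse_gps_pose_graph([1.1] * 4, {0: 0.0, 4: 4.0})
```

With the GPS terms weighted 10x, the endpoints land near their fixes and the 0.4 of accumulated drift is redistributed evenly across the intermediate steps, which is the same effect the pose graph optimization has on the full 6-DoF trajectory.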
Keywords: Unmanned aerial vehicles; 3D modeling; Cameras; Global Positioning System; Pose estimation; 3D vision; Point clouds