Paper
9 January 2025
Robust RGB-D visual odometry with adaptive selection of point, line, and plane features
Jing Xing, Jian Liu
Proceedings Volume 13486, Fourth International Conference on Computer Vision, Application, and Algorithm (CVAA 2024); 134860O (2025) https://doi.org/10.1117/12.3055728
Event: Fourth International Conference on Computer Vision, Application, and Algorithm (CVAA 2024), 2024, Chengdu, China
Abstract
To deal with low-texture scenes, many visual simultaneous localization and mapping (SLAM) methods introduce line and plane features to provide additional structural information for stable frame tracking. These methods typically use a fixed feature type or feature combination, such as point-line or point-line-plane features, throughout the entire camera pose estimation process. However, simply increasing the number of feature types introduces more noise sources, degrading camera pose estimation and thereby localization accuracy. To solve this problem, this paper proposes an RGB-D visual odometry method that automatically switches between feature modes. It adaptively selects the feature types used for frame-to-frame tracking based on the number of features extracted from the current frame. The proposed method is evaluated on the TUM-RGBD dataset and achieves better trajectory accuracy than other algorithms.
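The adaptive selection described in the abstract can be sketched as a simple decision rule. The function below is a minimal illustration of the idea, not the paper's actual algorithm: the threshold values and the function name are hypothetical, chosen only to show how feature counts from the current frame could drive the choice of tracking mode.

```python
# Hypothetical sketch of adaptive feature-mode selection: choose point,
# point-line, or point-line-plane tracking based on how many features
# the current frame yields. Thresholds are illustrative, not from the paper.

def select_feature_mode(n_points, n_lines, n_planes,
                        min_points=80, min_lines=15, min_planes=3):
    """Return the tuple of feature types to use for frame-to-frame tracking."""
    if n_points >= min_points:
        # Texture-rich frame: points alone suffice and add the least noise.
        return ("point",)
    mode = ["point"]
    if n_lines >= min_lines:
        # Low texture but visible edges: add line features.
        mode.append("line")
    if n_planes >= min_planes and n_points + n_lines < min_points:
        # Very low texture: add planes for extra structural constraints.
        mode.append("plane")
    return tuple(mode)
```

For example, a well-textured frame with 120 points would track with points only, while a frame with 30 points, 20 lines, and 4 planes would fall back to the full point-line-plane combination.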
(2025) Published by SPIE. Downloading of the abstract is permitted for personal use only.
Jing Xing and Jian Liu "Robust RGB-D visual odometry with adaptive selection of point, line, and plane features", Proc. SPIE 13486, Fourth International Conference on Computer Vision, Application, and Algorithm (CVAA 2024), 134860O (9 January 2025); https://doi.org/10.1117/12.3055728
KEYWORDS: Feature extraction, Cameras, Pose estimation, Feature fusion, Feature selection