weisongwen / researchTools

useful blogs for research

A tightly coupled monocular visual lidar odometry with loop closure #66

Open weisongwen opened 2 years ago

weisongwen commented 2 years ago

Simultaneous localization and mapping (SLAM) is a fundamental requirement for mobile robots such as self-driving cars. Vision-based methods have advantages in sensor cost and loop closure detection, but are sensitive to illumination changes and texture deficiency. Lidar-based SLAM systems perform better in accuracy, field of view, and robustness to environmental changes, but can easily fail in structure-less scenarios. To compensate for the deficiencies of standalone sensors and provide more efficient SLAM functionality, in this paper we propose a tightly coupled monocular visual lidar odometry, which fuses the measurements of a monocular camera and a 3D lidar in a joint optimization. The system starts with a data preprocessing module, which outputs 3D visual and laser features through feature extraction and data association. The tightly coupled visual lidar odometry then fuses the visual and laser features in a unified optimization framework to estimate the transformation between consecutive scans. Finally, we combine visual and vicinity loop detection to construct loop constraints and optimize a 6-DOF global pose graph, achieving globally consistent pose estimation and environment mapping. The performance of our system is verified on the public KITTI dataset, and the experimental results demonstrate that the proposed method runs in real time with 64-line lidar data and achieves better accuracy, runtime, and mapping quality than other state-of-the-art lidar-based and fusion-based methods.
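The abstract does not detail the unified optimization, so the sketch below is only an illustration of the general idea: jointly minimizing visual reprojection residuals and lidar point-to-plane residuals over a single 6-DOF scan-to-scan pose. The feature association is assumed given, the data is synthetic, and the intrinsics, residual weights, and all function names are my own placeholders, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as Rot

def transform(pose6, pts):
    """Apply a 6-DOF pose [rotvec(3), t(3)] to Nx3 points."""
    return pts @ Rot.from_rotvec(pose6[:3]).as_matrix().T + pose6[3:]

def visual_residuals(pose6, pts3d, obs_uv, fx=718.0, fy=718.0, cx=607.0, cy=185.0):
    """Pinhole reprojection error of tracked 3D visual features in the next
    frame (intrinsics are placeholders, roughly KITTI-like)."""
    p = transform(pose6, pts3d)
    u = fx * p[:, 0] / p[:, 2] + cx
    v = fy * p[:, 1] / p[:, 2] + cy
    return np.concatenate([u - obs_uv[:, 0], v - obs_uv[:, 1]])

def lidar_residuals(pose6, pts, plane_n, plane_d):
    """Signed point-to-plane distance of laser features to matched planes."""
    p = transform(pose6, pts)
    return np.sum(p * plane_n, axis=1) + plane_d

def joint_residuals(pose6, vis, lid, w_vis=0.01, w_lid=1.0):
    """Stack both residual types into one least-squares problem; the relative
    weights are an arbitrary assumption (pixels vs. meters)."""
    return np.concatenate([w_vis * visual_residuals(pose6, *vis),
                           w_lid * lidar_residuals(pose6, *lid)])

# --- synthetic scan-to-scan problem with a known ground-truth motion ---
rng = np.random.default_rng(0)
gt = np.array([0.02, -0.01, 0.015, 0.3, 0.05, 1.0])  # small rotation, mostly forward

pts3d = rng.uniform([-5, -2, 5], [5, 2, 20], size=(30, 3))   # visual landmarks
p_cam = transform(gt, pts3d)
obs_uv = np.stack([718.0 * p_cam[:, 0] / p_cam[:, 2] + 607.0,
                   718.0 * p_cam[:, 1] / p_cam[:, 2] + 185.0], axis=1)

lpts = rng.uniform([-10, -10, 0], [10, 10, 3], size=(50, 3))  # laser feature points
plane_n = rng.normal(size=(50, 3))
plane_n /= np.linalg.norm(plane_n, axis=1, keepdims=True)
plane_d = -np.sum(transform(gt, lpts) * plane_n, axis=1)      # planes consistent with gt

sol = least_squares(joint_residuals, np.zeros(6),
                    args=((pts3d, obs_uv), (lpts, plane_n, plane_d)))
print(np.round(sol.x, 4))  # recovers a pose close to gt
```

Because the synthetic features are noiseless, both residual types vanish at the true pose, so the joint solve recovers it regardless of the assumed weights; with real, noisy features the weighting (or a robust loss) would matter.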
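The 6-DOF global pose graph of the final step can likewise be sketched as a small least-squares problem. The toy below is my own construction, not the paper's back end: a square trajectory whose odometry is initialized with drift, pulled back into global consistency by one loop-closure edge.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as Rot

def relative(pose_i, pose_j):
    """Relative pose of j expressed in frame i; poses are [rotvec(3), t(3)]."""
    Ri, Rj = Rot.from_rotvec(pose_i[:3]), Rot.from_rotvec(pose_j[:3])
    return np.concatenate([(Ri.inv() * Rj).as_rotvec(),
                           Ri.inv().apply(pose_j[3:] - pose_i[3:])])

def graph_residuals(x, edges, n_poses):
    poses = x.reshape(n_poses, 6)
    res = [10.0 * poses[0]]               # gauge: pin the first pose at the origin
    for i, j, meas in edges:
        res.append(relative(poses[i], poses[j]) - meas)
    return np.concatenate(res)

# Square trajectory: four 1 m forward steps, each followed by a 90-degree left turn.
step = np.concatenate([Rot.from_euler('z', 90, degrees=True).as_rotvec(), [1, 0, 0]])
edges = [(i, i + 1, step) for i in range(4)]
edges.append((4, 0, np.zeros(6)))         # loop closure: pose 4 coincides with pose 0

# Initialize by chaining the odometry with an injected drift on every step.
drift = np.array([0, 0, 0.05, 0.1, 0, 0])
init = [np.zeros(6)]
for _ in range(4):
    prev = init[-1]
    Rp = Rot.from_rotvec(prev[:3])
    noisy = step + drift
    init.append(np.concatenate([(Rp * Rot.from_rotvec(noisy[:3])).as_rotvec(),
                                prev[3:] + Rp.apply(noisy[3:])]))
init = np.concatenate(init)

sol = least_squares(graph_residuals, init, args=(edges, 5))
poses = sol.x.reshape(5, 6)
print(np.linalg.norm(poses[4, 3:] - poses[0, 3:]))  # loop endpoints nearly coincide
```

Here the edge measurements themselves are mutually consistent, so the optimizer can drive the graph to (numerically) zero cost and close the loop exactly; a production back end would additionally weight each edge by its information matrix.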