WHU-USI3DV / CoFiI2P

[IEEE RA-L 2024 & ICRA'25] CoFiI2P: Coarse-to-Fine Correspondences-Based Image-to-Point Cloud Registration
https://whu-usi3dv.github.io/CoFiI2P/

CoFiI2P: Coarse-to-Fine Correspondences-Based Image-to-Point Cloud Registration

This is the official PyTorch implementation of the following publication:

CoFiI2P: Coarse-to-Fine Correspondences-Based Image-to-Point Cloud Registration
Shuhao Kang*, Youqi Liao*, Jianping Li, Fuxun Liang, Yuhao Li, Xianghong Zou, Fangning Li, Xieyuanli Chen, Zhen Dong, Bisheng Yang
IEEE RA-L 2024
Paper | Arxiv | Project-page | Video

🔭 Introduction

TL;DR: CoFiI2P is a coarse-to-fine framework for the image-to-point cloud registration task.

Motivation

Abstract: Image-to-point cloud (I2P) registration is a fundamental task for robots and autonomous vehicles to achieve cross-modality data fusion and localization. Current I2P registration methods primarily focus on estimating correspondences at the point or pixel level, often neglecting global alignment. As a result, I2P matching can easily converge to a local optimum if it lacks high-level guidance from global constraints. To improve the success rate and general robustness, this paper introduces CoFiI2P, a novel I2P registration network that extracts correspondences in a coarse-to-fine manner. First, the image and point cloud data are processed through a two-stream encoder-decoder network for hierarchical feature extraction. Second, a coarse-to-fine matching module is designed to leverage these features and establish robust feature correspondences. Specifically, in the coarse matching phase, a novel I2P transformer module is employed to capture both homogeneous and heterogeneous global information from the image and point cloud data. This enables the estimation of coarse super-point/super-pixel matching pairs with discriminative descriptors. In the fine matching module, point/pixel pairs are established with the guidance of super-point/super-pixel correspondences. Finally, based on the matching pairs, the transformation matrix is estimated with the EPnP-RANSAC algorithm. Experiments conducted on the KITTI Odometry dataset demonstrate that CoFiI2P achieves impressive results, with a relative rotation error (RRE) of 1.14 degrees and a relative translation error (RTE) of 0.29 meters, while maintaining real-time speed. These results represent a significant improvement of 84% in RRE and 89% in RTE compared to the current state-of-the-art (SOTA) method. Additional experiments on the nuScenes dataset confirm our method's generalizability.
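For reference, the final step described above (recovering the transformation matrix from the matched point/pixel pairs) can be reproduced with OpenCV's EPnP + RANSAC solver. The sketch below is illustrative only; the array names, intrinsics, and thresholds are placeholders and are not taken from the released code.

import numpy as np
import cv2

def estimate_pose(pts_3d, pts_2d, K):
    # pts_3d: (N, 3) matched points from the point cloud
    # pts_2d: (N, 2) matched pixel coordinates in the image
    # K:      (3, 3) camera intrinsic matrix
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d.astype(np.float64), pts_2d.astype(np.float64), K, None,
        flags=cv2.SOLVEPNP_EPNP,
        reprojectionError=2.0,   # inlier threshold in pixels (illustrative value)
        iterationsCount=500,
    )
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, tvec.ravel()
    return T, inliers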

🆕 News

💻 Installation

An example for CUDA 11.6 and PyTorch 1.13.1:

pip3 install fvcore
pip3 install open3d==0.17.0
pip3 install opencv-python
pip3 install torchvision==0.14.1
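Note that the commands above do not install PyTorch itself. Assuming the CUDA 11.6 / PyTorch 1.13.1 combination mentioned above, it can be installed from the official PyTorch wheel index and verified as follows:

pip3 install torch==1.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"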

We will provide a Docker image for a quick start.

🚅 Usage

KITTI data preprocessing

You can download the processed data here or process it from source. For more details, please refer to CorrI2P.
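If you process the data from source, note that each KITTI Odometry LiDAR frame is stored as a flat float32 binary file of (x, y, z, reflectance) values. Below is a minimal sketch for reading one scan and projecting it into the paired image; the file names and the projection matrix are placeholders, and CorrI2P documents the exact preprocessing pipeline.

import numpy as np

# Load one velodyne scan: a flat float32 array of (x, y, z, reflectance).
scan = np.fromfile("sequences/00/velodyne/000000.bin", dtype=np.float32).reshape(-1, 4)
points = scan[:, :3]

# Placeholder 3x4 projection matrix; compose it from P2 and Tr in the
# sequence's calib.txt (with Tr extended to a 4x4 transform).
P = np.eye(3, 4, dtype=np.float32)

pts_h = np.hstack([points, np.ones((points.shape[0], 1), dtype=np.float32)])
uvw = (P @ pts_h.T).T
in_front = uvw[:, 2] > 0
uv = uvw[in_front, :2] / uvw[in_front, 2:3]   # pixel coordinates of points in front of the camera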

nuScenes data preprocessing

Due to the extremely large scale of the processed data (approximately 200 GB), we only provide the data pre-processing code for now. Please download the source data here and refer to the following steps to build the image-to-point cloud registration data:
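As a general starting point before running the preprocessing code, the raw samples can be iterated with the official nuscenes-devkit. The sketch below uses illustrative paths and version strings and is not the repository's preprocessing script:

from nuscenes.nuscenes import NuScenes

# Open the raw dataset (adjust version and dataroot to your download).
nusc = NuScenes(version="v1.0-trainval", dataroot="/data/nuscenes", verbose=True)

sample = nusc.sample[0]
cam_token = sample["data"]["CAM_FRONT"]
lidar_token = sample["data"]["LIDAR_TOP"]

# Paths to the paired image and LiDAR sweep, plus the camera intrinsics for projection.
cam_path, _, cam_intrinsic = nusc.get_sample_data(cam_token)
lidar_path, _, _ = nusc.get_sample_data(lidar_token)
print(cam_path, lidar_path)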

Evaluation

For the KITTI Odometry dataset and the nuScenes dataset, we provide pre-trained models on OneDrive and Baidu Disk. Please download the CoFiI2P weights from either drive and put them in a local folder (e.g., ./checkpoints/ as used in the command below).

Example: evaluate CoFiI2P on the KITTI Odometry dataset

python -m evaluation.eval_all ./checkpoints/cofii2p_kitti.t7 kitti

The above command calculates the per-frame registration error and saves intermediate results.
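For reference, the relative rotation error (RRE) and relative translation error (RTE) can be computed from an estimated and a ground-truth 4x4 transform as sketched below; this is a generic formulation and may differ in detail from the repository's evaluation code.

import numpy as np

def registration_errors(T_est, T_gt):
    # Relative rotation error in degrees.
    R_err = T_est[:3, :3].T @ T_gt[:3, :3]
    cos_angle = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)  # clip for numerical safety
    rre = np.degrees(np.arccos(cos_angle))
    # Relative translation error in meters.
    rte = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    return rre, rte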

Then run:

python -m evaluation.calc_result
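If you want to inspect the statistics yourself, the per-frame errors saved by the previous step can also be summarized with a few lines of NumPy; the file name and layout below are placeholders, so check the evaluation scripts for the actual output format.

import numpy as np

# Placeholder: an (N, 2) array of per-frame [RRE_deg, RTE_m] values.
errors = np.loadtxt("results/per_frame_errors.txt")
print("mean RRE / RTE:", errors.mean(axis=0))
print("median RRE / RTE:", np.median(errors, axis=0))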

Training

Example: train CoFiI2P on the KITTI Odometry dataset

python -m train kitti

💡 Citation

If you find this repo helpful, please give us a star. Please consider citing CoFiI2P if this program benefits your project.

@article{kang2023cofii2p,
  title={CoFiI2P: Coarse-to-Fine Correspondences-Based Image-to-Point Cloud Registration},
  author={Shuhao Kang and Youqi Liao and Jianping Li and Fuxun Liang and Yuhao Li and Xianghong Zou and Fangning Li and Xieyuanli Chen and Zhen Dong and Bisheng Yang},
  year={2023},
  eprint={2309.14660},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

🔗 Related Projects

We sincerely thank the following excellent projects: