PREDATOR: Registration of 3D Point Clouds with Low Overlap (CVPR 2021, Oral)

This repository represents the official implementation of the paper:

PREDATOR: Registration of 3D Point Clouds with Low Overlap

Shengyu Huang*, Zan Gojcic*, Mikhail Usvyatsov, Andreas Wieser, Konrad Schindler | ETH Zurich | * equal contribution

For an implementation using a MinkowskiEngine backbone, please check this.

For more information, please see the project website: https://shengyuh.github.io/predator/index.html

[Figure: Predator teaser]

Contact

If you have any questions, please let us know.

News

Instructions

This code has been tested on:

Note: We observe random data loader crashes due to memory issues. If you observe similar issues, please consider reducing the number of workers or increasing CPU RAM. We have now released a sparse convolution-based Predator; have a look here!
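
If the crashes persist, lowering the worker count is usually a one-line change. A minimal sketch, assuming a standard torch.utils.data.DataLoader is used (the dummy dataset below only stands in for the repository's own dataset objects):

# Minimal sketch: lowering the worker count of a standard PyTorch DataLoader.
# Assumption: the repository builds its loaders with torch.utils.data.DataLoader;
# the dummy TensorDataset is a stand-in for the repo's point-cloud datasets.
import torch
from torch.utils.data import DataLoader, TensorDataset

dummy = TensorDataset(torch.randn(8, 1024, 3))  # 8 point clouds of 1024 points
loader = DataLoader(dummy, batch_size=1, shuffle=True,
                    num_workers=0)  # num_workers=0 avoids worker processes entirely
for (batch,) in loader:
    print(batch.shape)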

Requirements

To create a virtual environment and install the required dependencies, please run the following in your working folder:

git clone https://github.com/overlappredator/OverlapPredator.git
virtualenv predator; source predator/bin/activate
cd OverlapPredator; pip install -r requirements.txt
cd cpp_wrappers; sh compile_wrappers.sh; cd ..

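Once the environment is set up, a quick sanity check that PyTorch and CUDA are visible can save debugging time later (a generic check, not part of the repository):

# Quick sanity check that the virtual environment sees PyTorch and a GPU.
import torch
print('PyTorch', torch.__version__)
print('CUDA available:', torch.cuda.is_available())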

Datasets and pretrained models

For the KITTI dataset, please follow the instructions on the KITTI Odometry website to download the KITTI odometry training set.

We provide:

The preprocessed data and models can be downloaded by running:

sh scripts/download_data_weight.sh

To download raw dense 3DMatch data, please run:

wget --no-check-certificate --show-progress https://share.phys.ethz.ch/~gsg/pairwise_reg/3dmatch.zip
unzip 3dmatch.zip

The folder is organised as follows:

3DMatch (Indoor)

Train

After creating the virtual environment and downloading the datasets, Predator can be trained using:

python main.py configs/train/indoor.yaml
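
Every entry point is driven by a single YAML configuration file. To see which options a run will use, the file can be inspected with a standard YAML parser (a generic sketch; the repository's own loader may post-process the config further):

# Peek at the training configuration before launching a run.
# Assumption: configs/train/indoor.yaml is plain YAML readable by pyyaml.
import yaml

with open('configs/train/indoor.yaml') as f:
    cfg = yaml.safe_load(f)
print(sorted(cfg))  # top-level option groups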

Evaluate

For 3DMatch, to reproduce Table 2 in our main paper, we first extract features and overlap/matchability scores by running:

python main.py configs/test/indoor.yaml

The features together with the scores will be saved to snapshot/indoor/3DMatch. The estimation of the transformation parameters using RANSAC can then be carried out by running:

for N_POINTS in 250 500 1000 2500 5000
do
  python scripts/evaluate_predator.py --source_path snapshot/indoor/3DMatch --n_points $N_POINTS --benchmark 3DMatch --exp_dir snapshot/indoor/est_traj --sampling prob
done

Depending on the n_points used by RANSAC, this might take a few minutes. The final results are stored in snapshot/indoor/est_traj/{benchmark}_{n_points}_prob/result. To evaluate PREDATOR on the 3DLoMatch benchmark, please change 3DMatch to 3DLoMatch in configs/test/indoor.yaml.
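
For orientation, the RANSAC step conceptually matches the saved descriptors between the two fragments and estimates a rigid transformation, in the spirit of Open3D's feature-based registration. A minimal sketch under that assumption (random placeholder arrays stand in for the saved points and features; the actual pipeline lives in scripts/evaluate_predator.py):

# Minimal sketch of feature-based RANSAC registration with Open3D (>= 0.12).
# Assumptions: src_xyz/tgt_xyz are (N, 3) point arrays and src_feat/tgt_feat
# are (N, D) descriptor arrays; random placeholders are used here.
import numpy as np
import open3d as o3d

def to_o3d(xyz, feat):
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(xyz)
    f = o3d.pipelines.registration.Feature()
    f.data = feat.T  # Open3D expects features as (D, N)
    return pcd, f

src_xyz, tgt_xyz = np.random.rand(1000, 3), np.random.rand(1000, 3)    # placeholders
src_feat, tgt_feat = np.random.rand(1000, 32), np.random.rand(1000, 32)
src, src_f = to_o3d(src_xyz, src_feat)
tgt, tgt_f = to_o3d(tgt_xyz, tgt_feat)

result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    src, tgt, src_f, tgt_f, True, 0.05,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [], o3d.pipelines.registration.RANSACConvergenceCriteria(50000, 0.999))
print(result.transformation)  # estimated 4x4 rigid transformation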

Demo

We prepared a small demo that demonstrates the whole Predator pipeline using two random fragments from the 3DMatch dataset. To run the demo, please execute:

python scripts/demo.py configs/test/indoor.yaml

The demo script will visualize the input point clouds, the inferred overlap regions, and the point clouds aligned with the estimated transformation parameters:

[Figure: demo visualization]
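
The visualization itself boils down to drawing the two fragments before and after applying the estimated transformation. A rough sketch of that final step with Open3D (generic API, not the demo script's exact code; file names and the transformation are placeholders):

# Rough sketch of visualizing an aligned pair with Open3D.
# "src.ply"/"tgt.ply" and T are placeholders for the demo fragments and the
# estimated 4x4 transformation.
import numpy as np
import open3d as o3d

src = o3d.io.read_point_cloud('src.ply')   # placeholder path
tgt = o3d.io.read_point_cloud('tgt.ply')   # placeholder path
T = np.eye(4)                              # placeholder transformation

src.paint_uniform_color([1.0, 0.7, 0.0])
tgt.paint_uniform_color([0.0, 0.6, 0.9])
src.transform(T)                           # apply the estimated transformation
o3d.visualization.draw_geometries([src, tgt])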

ModelNet (Synthetic)

Train

To train PREDATOR on ModelNet, please run:

python main.py configs/train/modelnet.yaml

We provide a small script to evaluate Predator on the ModelNet test set; please run:

python main.py configs/test/modelnet.yaml

The rotation and translation errors may be slightly better or worse than the reported ones due to the randomness in RANSAC.

KITTI (Outdoor)

We provide a small script to evaluate Predator on the KITTI test set. After configuring the KITTI dataset, please run:

python main.py configs/test/kitti.yaml

The results will be saved to the log file.

Custom dataset

We have a few tips for training/testing on custom datasets; a minimal sketch of a pairwise dataset is given below as a starting point.
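
A custom dataset usually needs to yield a pair of point clouds plus the ground-truth transformation. The sketch below is entirely hypothetical: the class name, dictionary keys, and tensor layout are assumptions, to be matched against the repository's own dataset classes before use.

# Entirely hypothetical sketch of a custom pairwise dataset; keys and tensor
# layout are assumptions, not the repository's actual interface.
import numpy as np
import torch
from torch.utils.data import Dataset

class CustomPairDataset(Dataset):
    def __init__(self, pairs):
        # pairs: list of (src_path, tgt_path, gt_transform) tuples, where
        # gt_transform is a 4x4 numpy array mapping source to target
        self.pairs = pairs

    def __len__(self):
        return len(self.pairs)

    def __getitem__(self, idx):
        src_path, tgt_path, trans = self.pairs[idx]
        src = np.load(src_path).astype(np.float32)  # (N, 3) source points
        tgt = np.load(tgt_path).astype(np.float32)  # (M, 3) target points
        return {
            'src_points': torch.from_numpy(src),
            'tgt_points': torch.from_numpy(tgt),
            'rot': torch.from_numpy(trans[:3, :3].astype(np.float32)),
            'trans': torch.from_numpy(trans[:3, 3:].astype(np.float32)),
        }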

Citation

If you find this code useful for your work or use it in your project, please consider citing:

@InProceedings{Huang_2021_CVPR,
    author    = {Huang, Shengyu and Gojcic, Zan and Usvyatsov, Mikhail and Wieser, Andreas and Schindler, Konrad},
    title     = {Predator: Registration of 3D Point Clouds With Low Overlap},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {4267-4276}
}

Acknowledgments

In this project we use (parts of) the official implementations of the following works: