WHU-USI3DV / SGHR

[CVPR 2023] Robust Multiview Point Cloud Registration with Reliable Pose Graph Initialization and History Reweighting

😍 SGHR

Robust Multiview Point Cloud Registration with Reliable Pose Graph Initialization and History Reweighting

CVPR 2023

Haiping Wang*,1, Yuan Liu*,2, Zhen Dong†,1, Yulan Guo3, Yu-Shen Liu4, Wenping Wang5, Bisheng Yang†,1

1Wuhan University    2The University of Hong Kong    3Sun Yat-sen University   
4Tsinghua University    5Texas A&M University   
*The first two authors contributed equally.    †Corresponding authors.   

In this paper, we present a new method for the multiview registration of point clouds. Previous multiview registration methods rely on exhaustive pairwise registration to construct a densely-connected pose graph and apply Iteratively Reweighted Least Squares (IRLS) on the pose graph to compute the scan poses. However, constructing a densely-connected graph is time-consuming and introduces many outlier edges, which makes the subsequent IRLS struggle to find correct poses. To address these problems, we first propose to use a neural network to estimate the overlap between scan pairs, which enables us to construct a sparse but reliable pose graph. Then, we design a novel history reweighting function in the IRLS scheme, which is strongly robust to outlier edges on the graph. In comparison with existing multiview registration methods, our method achieves 11% higher registration recall on the 3DMatch dataset and ~13% lower registration errors on the ScanNet dataset while reducing the required pairwise registrations by ~70%. Comprehensive ablation studies demonstrate the effectiveness of our designs.
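The history reweighting idea can be illustrated on a toy problem. The sketch below is *not* the paper's formulation (SGHR operates on rotations/poses on a scan graph; the weighting function here is illustrative): nodes carry scalar "poses", edges measure pairwise differences with one outlier edge, and each edge's IRLS weight is computed from the *average of its residuals over all past iterations* rather than the current residual alone, so an outlier edge stays downweighted even if a single iteration happens to fit it.

```python
import numpy as np

def irls_history(n, edges, iters=30, tau=0.5, gamma=2.0):
    """Toy IRLS on a scalar pose graph with a history-style reweighting.

    Illustrative sketch, not SGHR's implementation: nodes carry scalar
    poses x_i, each edge (i, j, m) measures x_j - x_i, and edge weights
    are driven by the mean residual over all past iterations (the
    'history'), which damps oscillation caused by outlier edges.
    """
    x = np.zeros(n)
    w = np.ones(len(edges))
    history = [[] for _ in edges]
    for _ in range(iters):
        # Weighted least squares: minimize sum_k w_k (x_j - x_i - m_k)^2,
        # with an extra row fixing the gauge freedom (x_0 = 0).
        A = np.zeros((len(edges) + 1, n))
        b = np.zeros(len(edges) + 1)
        for k, (i, j, m) in enumerate(edges):
            sw = np.sqrt(w[k])
            A[k, i], A[k, j], b[k] = -sw, sw, sw * m
        A[-1, 0] = 1.0  # gauge fix
        x = np.linalg.lstsq(A, b, rcond=None)[0]
        # History reweighting: weight from the mean of all past residuals.
        for k, (i, j, m) in enumerate(edges):
            history[k].append(abs(x[j] - x[i] - m))
            r_hist = np.mean(history[k])
            w[k] = 1.0 / (1.0 + (r_hist / tau) ** gamma)
    return x
```

On a small chain with one grossly wrong edge, the outlier's accumulated residual history keeps its weight near zero, so the recovered poses follow the consistent edges.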

| Paper | Poster | Video |

πŸ†• News

✨ Pipeline

Network

πŸ’» Requirements

SGHR uses YOHO as its backbone, so the YOHO requirements need to be met.

Specifically, the code has been tested with:

πŸ”§ Installation

πŸ’Ύ Dataset & Pretrained model

The datasets are accessible on BaiduDesk (code: oouk) and Google Cloud:

Trainset:

Testset:

The datasets above contain the point clouds (.ply), keypoints (.txt, 5000 per point cloud), and rotation-invariant yoho-desc (.npy, extracted on the keypoints) files. Please place the data under `./data` following the example data structure:

data/
β”œβ”€β”€ 3dmatch/
β”‚   └── kitchen/
β”‚       β”œβ”€β”€ PointCloud/
β”‚       β”‚   β”œβ”€β”€ cloud_bin_0.ply
β”‚       β”‚   β”œβ”€β”€ gt.log
β”‚       β”‚   └── gt.info
β”‚       β”œβ”€β”€ yoho_desc/
β”‚       β”‚   └── 0.npy
β”‚       └── Keypoints/
β”‚           └── cloud_bin_0Keypoints.txt
β”œβ”€β”€ 3dmatch_train/
β”œβ”€β”€ scannet/
└── ETH/
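A quick way to catch layout mistakes before training or testing is to check that each scene folder contains the three expected sub-folders. This helper is hypothetical (it is not part of the SGHR codebase) and only verifies the folder names shown above:

```python
from pathlib import Path

def check_scene(scene_dir):
    """Return the sub-folders missing from one scene directory.

    Hypothetical sanity-check helper, not part of SGHR: it only verifies
    that the PointCloud/, yoho_desc/ and Keypoints/ folders exist.
    """
    missing = []
    for sub in ("PointCloud", "yoho_desc", "Keypoints"):
        if not (Path(scene_dir) / sub).is_dir():
            missing.append(sub)
    return missing
```

For example, `check_scene("data/3dmatch/kitchen")` should return an empty list once the data is placed correctly.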

πŸš… Train

You can train SGHR with the 3dmatch_train dataset downloaded above, for which we provide the 32-dim rotation-invariant yoho-desc we extracted on 3dmatch_train; you can also extract the 32-dim invariant yoho-desc yourself (row-pooling on yoho-desc) and save the features to `data/3dmatch_train/<scene>/yoho_desc`. Then, you can train SGHR with the following command:

python Train.py

✏️ Use SGHR in the easiest way!

Using SGHR is quite simple: just prepare your point cloud files, and no other effort is needed! Follow here.

✏️ Test

Try SGHR on the demo files by:

python demo.py --pcdir data/demo

To evaluate SGHR on 3DMatch and 3DLoMatch, you can use the following commands:

# extract global features
python Test.py --dataset 3dmatch
# conduct multiview registration
python Test_cycle.py --dataset 3dmatch --rr
# visualize the registration results
python visual.py --dataset 3dmatch

To evaluate SGHR on ScanNet, you can use the following commands:

python Test.py --dataset scannet
python Test_cycle.py --dataset scannet --ecdf

To evaluate SGHR on ETH, you can use the following commands:

python Test.py --dataset ETH
python Test_cycle.py --dataset ETH --topk 6 --inlierd 0.2 --tau_2 0.5 --rr
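The `--topk` flag controls how sparse the pose graph is: each scan is registered only against its most-overlapping partners. The sketch below illustrates that selection step on a symmetric overlap-score matrix; the scores here are made up for illustration (in SGHR they are predicted by the network from global features), and the function is a hypothetical helper, not the repository's code:

```python
import numpy as np

def topk_graph(overlap, k=6):
    """Build a sparse, undirected pose graph by keeping each scan's k
    highest-overlap partners.

    Illustrative sketch of the --topk idea: `overlap` is an (n, n)
    symmetric matrix of pairwise overlap scores (hypothetical here;
    predicted by a network in SGHR).
    """
    n = overlap.shape[0]
    edges = set()
    for i in range(n):
        scores = overlap[i].astype(float).copy()
        scores[i] = -np.inf  # exclude self-pairs
        for j in np.argsort(scores)[-k:]:  # k largest scores
            edges.add((min(i, int(j)), max(i, int(j))))
    return sorted(edges)
```

With k much smaller than n, this keeps O(n·k) pairwise registrations instead of the O(n²) of an exhaustive graph, which is the source of the ~70% reduction claimed in the abstract.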

πŸ’‘ Citation

Please consider citing SGHR if this program benefits your project:

@inproceedings{wang2023robust,
  title={Robust Multiview Point Cloud Registration with Reliable Pose Graph Initialization and History Reweighting},
  author={Haiping Wang and Yuan Liu and Zhen Dong and Yulan Guo and Yu-Shen Liu and Wenping Wang and Bisheng Yang},
  booktitle={Conference on Computer Vision and Pattern Recognition},
  year={2023}
}

πŸ”— Related Projects

Take a look at our previous works on feature extraction and pairwise registration!