zhulf0804 / GCNet

Leveraging Inlier Correspondences Proportion for Point Cloud Registration. https://arxiv.org/abs/2201.12094.
MIT License

Test and visualize on custom data #4

Closed ttsesm closed 2 years ago

ttsesm commented 2 years ago

Hi, thanks for sharing your work. Could you give some guidance on how to test/evaluate the pre-trained models on two given point clouds, and possibly on custom data?

zhulf0804 commented 2 years ago

Thanks for your interest.

Yes, I have tried what you referred to before.

I'm sorry I can't update the code right away, as I am busy with other tasks. I'll update it as soon as possible.

Best regards.

ttsesm commented 2 years ago

Sure, I was thinking of something more or less similar to this: https://github.com/yewzijian/RegTR/blob/main/src/demo.py, which is a work comparable to yours.

Thanks.

zhulf0804 commented 2 years ago

Yes, I also think RegTR is an excellent work in point cloud registration.

Also, Zi Jian Yew's code style is worth learning from.

Good luck.

ttsesm commented 2 years ago

One more question: going through the paper as well as the source code, it is not clear to me how you use the transformation and the correspondence points. If I understand correctly, you use the correspondence points in the circle loss and the transformation matrices in the overlap and saliency losses, and you compute the overlap and saliency scores on the fly. Is that correct? Could you confirm/elaborate a bit on this?

Btw, what is your opinion of GeoTransformer and Lepard in relation to your approach?

Thanks.

zhulf0804 commented 2 years ago

In fact, correspondences are obtained through the ground-truth transformation matrix, as implemented in the function get_correspondences. The details of how the correspondences and transformation are used can be seen in the loss implementation. But I think it's more important to understand how overlap and saliency are defined for supervised learning.
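For context, here is a minimal sketch of that idea (not the repository's exact get_correspondences code; the function name, the SciPy KD-tree, and the 5 cm radius are assumptions): warp the source cloud with the ground-truth pose, pair up points that fall within a distance threshold, and derive overlap labels from those pairs.

```python
import numpy as np
from scipy.spatial import cKDTree

def gt_correspondences_and_overlap(src_pts, tgt_pts, T_gt, radius=0.05):
    """src_pts: (N, 3), tgt_pts: (M, 3), T_gt: (4, 4) ground-truth src->tgt transform."""
    # Warp the source points with the ground-truth transformation.
    src_warped = src_pts @ T_gt[:3, :3].T + T_gt[:3, 3]

    # Pair each warped source point with every target point within `radius`.
    tree = cKDTree(tgt_pts)
    pairs = [(i, j)
             for i, nbrs in enumerate(tree.query_ball_point(src_warped, r=radius))
             for j in nbrs]
    pairs = np.asarray(pairs, dtype=np.int64).reshape(-1, 2)  # (K, 2): (src_idx, tgt_idx)

    # Overlap labels for supervision: a point is "in the overlap" if it has at
    # least one correspondence partner under the ground-truth transformation.
    src_overlap = np.zeros(len(src_pts), dtype=bool)
    tgt_overlap = np.zeros(len(tgt_pts), dtype=bool)
    if len(pairs):
        src_overlap[pairs[:, 0]] = True
        tgt_overlap[pairs[:, 1]] = True
    return pairs, src_overlap, tgt_overlap
```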

The second question is very interesting. The following are my personal views.

  1. Both Lepard and GeoTransformer are CVPR 22 Orals, so they should be outstanding works.
  2. The highlight of Lepard is that it handles both rigid and non-rigid registration. I haven't looked into the details much.
  3. GeoTransformer extends CoFiNet (NeurIPS 21) with a geometric Transformer, solving registration in an end-to-end manner. RegTR (which you referred to above) also proposes an end-to-end Transformer-based registration method. I think the task is challenging and the results are amazing. Nice work.
  4. Our work proposes several universal mechanisms (ms + voting, GGE module) to boost registration performance on different registration networks (FCGF, D3Feat, PREDATOR, etc.). In the paper, we extend a PREDATOR-style network with the proposed universal mechanisms as our network.

ttsesm commented 2 years ago

Thanks for the elaboration.

zhulf0804 commented 2 years ago

> Hi, thanks for sharing your work. Could you give some guidance on how to test/evaluate the pre-trained models on two given point clouds, and possibly on custom data?

Hi, sorry for the late update.

A demo for testing on a custom point cloud pair is provided here; unseen scene data with the same density as the pre-training dataset is supported.

For point clouds with different densities, we find that KPConv-based architectures (such as this work) may not generalize well. If KPConv is replaced with MinkowskiEngine, the network generalizes better.
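As a rough illustration of what "same density" means in practice, the sketch below voxel-downsamples a custom pair before feeding it to the demo. The file paths and the 2.5 cm voxel size are placeholders, not values taken from this repository.

```python
import open3d as o3d

VOXEL_SIZE = 0.025  # assumed to roughly match the pre-training density (placeholder value)

# Placeholder paths for a custom point cloud pair.
src = o3d.io.read_point_cloud("source.ply")
tgt = o3d.io.read_point_cloud("target.ply")

# Downsample both clouds so their density is close to what the model saw in training.
src_down = src.voxel_down_sample(voxel_size=VOXEL_SIZE)
tgt_down = tgt.voxel_down_sample(voxel_size=VOXEL_SIZE)

# The downsampled pair can then be passed to the provided demo / pretrained model.
print(len(src_down.points), len(tgt_down.points))
```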

Updates:

Testing data with a voxel size different from the pre-training dataset is now supported.