The-Learning-And-Vision-Atelier-LAVA / BUFFER

[CVPR 2023] BUFFER: Balancing Accuracy, Efficiency, and Generalizability in Point Cloud Registration

Pretrained models and train dataset of 3DMatch #1

Open yaorz97 opened 1 year ago

yaorz97 commented 1 year ago

I am interested in your excellent work, and I would like to know: 1) what are the differences between your processed 3DMatch training dataset and the training dataset provided by PREDATOR; and 2) could you please provide the pre-trained models on 3DMatch so the results can be reproduced?

aosheng1996 commented 1 year ago

Hi @Pterosaur-Yao, thanks for your interest in our work!

1) The 3DMatch training set we processed is almost the same as PREDATOR's; the only difference is the subsampling. To improve computational efficiency, we subsampled the raw fragments with a voxel size of 1.5 cm instead of directly using the raw dense fragments (see the sketch after this list).

2) Sorry for the inconvenience; I missed this file in the previous upload. I have now updated the project.
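
For reference, the 1.5 cm subsampling in 1) can be reproduced with Open3D's voxel downsampling. This is only a minimal sketch under that assumption, not the exact preprocessing script used for BUFFER; the file path is a placeholder.

```python
# Minimal sketch (assumption, not the authors' script): voxel-downsample a raw
# 3DMatch fragment at 1.5 cm before using it for training.
import open3d as o3d

def subsample_fragment(ply_path: str, voxel_size: float = 0.015):
    """Load a raw dense fragment and return a voxel-downsampled copy (meters)."""
    pcd = o3d.io.read_point_cloud(ply_path)       # raw dense fragment
    pcd_down = pcd.voxel_down_sample(voxel_size)  # 1.5 cm grid subsampling
    return pcd_down

# Example usage (hypothetical file name):
# down = subsample_fragment("cloud_bin_0.ply")
```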

Best, Sheng

yaorz97 commented 1 year ago

Thank you, Sheng. I'm also interested in how the method performs on the rotated 3DMatch dataset. Could you please provide a test script or tell me how to modify the test file?

aosheng1996 commented 1 year ago

To evaluate on the rotated 3DMatch dataset, you could add an arbitrary rotation to each pair of fragments in https://github.com/aosheng1996/BUFFER/blob/8c5d6c7772e5b8c5ae70e8c568ce1545c7d976d5/ThreeDMatch/dataset.py#L121C15-L121C15 and update the 'relt_pose' accordingly.
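
In case it helps, here is a minimal sketch of that idea (my own illustration, not code from the repository). It assumes `relt_pose` is a 4x4 matrix mapping source points to target points (tgt = relt_pose @ src); if the convention in dataset.py differs, the pose update has to be adjusted.

```python
# Sketch (assumption): rotate both fragments by independent random rotations
# and update the relative pose as relt_pose' = T_tgt @ relt_pose @ inv(T_src).
import numpy as np

def random_rotation_matrix(rng=np.random):
    """Sample a uniform random rotation via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.randn(3, 3))
    q *= np.sign(np.diag(r))      # fix column signs so the factorization is unique
    if np.linalg.det(q) < 0:      # ensure a proper rotation (det = +1)
        q[:, 0] *= -1
    return q

def rotate_pair(src_pts, tgt_pts, relt_pose):
    """Apply arbitrary rotations to both fragments and return the updated pose."""
    T_src, T_tgt = np.eye(4), np.eye(4)
    T_src[:3, :3] = random_rotation_matrix()
    T_tgt[:3, :3] = random_rotation_matrix()
    src_rot = src_pts @ T_src[:3, :3].T
    tgt_rot = tgt_pts @ T_tgt[:3, :3].T
    # tgt' = T_tgt @ tgt = T_tgt @ relt_pose @ inv(T_src) @ src'
    new_pose = T_tgt @ relt_pose @ np.linalg.inv(T_src)
    return src_rot, tgt_rot, new_pose
```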