XuyangBai / D3Feat

[TensorFlow] Official implementation of CVPR'20 oral paper - D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features https://arxiv.org/abs/2003.03164
MIT License

A question about 'voxel_size' #45

Closed. QWTforGithub closed this issue 2 years ago.

QWTforGithub commented 2 years ago

Hi, I noticed that the voxel size used for downsampling when training on 3DMatch is 0.03 (first_subsampling_dl = 0.03). Is the same 0.03 downsampling also used when testing? In general, when we test a model, training and testing should be set up the same way, right? For example, if I train the model with a 0.025 downsampling setting, I should also test it with 0.025.

XuyangBai commented 2 years ago

Hi, thanks for your interest. Yes, keeping the same voxel size for training and testing generally gives the best results; otherwise you need some tricks to rescale the kernel points. I use the following code to rescale the kernel points and generalize the model pretrained on 3DMatch to ETH. https://github.com/XuyangBai/D3Feat/blob/476df5362bb398a0104266f4d1598cc54de21712/utils/tester.py#L164-L168
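For context, the trick referenced above amounts to multiplying the learned kernel-point coordinates by the ratio between the test-time voxel size and the training voxel size, so that the KPConv kernel geometry stretches with the coarser (or finer) downsampling. The sketch below (TensorFlow 1.x style, to match the repo) only illustrates that idea and is not the repository's exact code: the variable-name filter `kernel_points`, the specific voxel sizes, and the helper name `rescale_kernel_points` are illustrative assumptions.

```python
import tensorflow as tf

# Assumed voxel sizes: the model was trained on 3DMatch with 0.03,
# but is evaluated on data voxelized at a different resolution.
TRAIN_VOXEL_SIZE = 0.03    # first_subsampling_dl used during training
TEST_VOXEL_SIZE = 0.0625   # first_subsampling_dl used at test time (assumption)

def rescale_kernel_points(sess, train_dl=TRAIN_VOXEL_SIZE, test_dl=TEST_VOXEL_SIZE):
    """Scale every kernel-point variable by the ratio of voxel sizes.

    KPConv kernel point coordinates are defined relative to the voxel size
    used at training time; when the test-time voxel size changes, the kernel
    geometry should be stretched by the same factor. The 'kernel_points'
    name filter is an assumption about how the variables are named.
    """
    ratio = test_dl / train_dl
    assign_ops = []
    for var in tf.global_variables():
        if 'kernel_points' in var.name:
            assign_ops.append(tf.assign(var, var * ratio))
    sess.run(assign_ops)
```

With a trick like this, only the kernel geometry is adapted; the learned weights stay unchanged, which is why the model pretrained on 3DMatch can be applied to ETH without retraining.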

QWTforGithub commented 2 years ago

> Hi, thanks for your interest. Yes, keeping the same voxel size for training and testing generally gives the best results; otherwise you need some tricks to rescale the kernel points. I use the following code to rescale the kernel points and generalize the model pretrained on 3DMatch to ETH.

Thank you!