Post your theoretical questions / usage questions here.
Thanks a lot for your open resources! I'm running comparison experiments with your method, and I have a few questions about the experiments on the 3DMatch dataset.
1) You said 'The authors provide 5000 randomly sampled keypoints for each scan.' in your paper. However, I can't find any keypoint files in the 3DMatch dataset. Could you please give me more information about the keypoint files?
2) Could you tell me the specific process for computing 3DSmoothNet descriptors on the 3DMatch dataset? You said 'On average, there are 205 pairs of scans per scene (maximum: 519 in the Kitchen scene, minimum: 54 in the Hotel 3 scene).' in your paper. I'm wondering how you obtained the pairs of scans, since the raw data are separate point clouds.
The pairs are essentially scans that have overlapping areas, and they are provided in the dataset. We didn't follow the official 3DMatch evaluation procedure, as that also requires the method to first identify which pairs can be matched. Since we were testing registration performance, it made sense to just use the pairs provided in the dataset.