Hi, I do it in a tricky way: I use the rotated point cloud as input, then I regard the output descriptors as the local descriptors of the original point cloud. In this way, I do not need to modify the ground truth transformation, and the result is similar to applying the transformation to the original point cloud together with the GT pose.
Thank you for your reply. I still want to make sure: you first rotate the input point clouds P and Q by the same random angle, then use them as input to the network and extract the corresponding descriptors, right? In this way, the ground truth does not need to be changed. So every pair recorded in the ground truth is processed this way, right?
Hi, let me make it clearer. For feature matching recall, we only need the ground truth correspondences between the two point clouds, not the transformation matrix, and no matter how we rotate the point clouds, the GT correspondence relationship does not change. My evaluation follows this idea: suppose we have point clouds P and Q and the ground truth transformation T between them.
- Randomly rotate P and Q separately to get P' and Q', and use our network to get the features F_P' and F_Q'
- Use the ground truth correspondences established between P and Q to evaluate F_P' and F_Q'
I chose to do it this way because changing the transformation matrix is somewhat error-prone. A more formal way would be to rotate the two point clouds, change the GT transformation matrix accordingly, and use the updated matrix to compute the GT correspondences for the evaluation. But it should give the same result.
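To sketch the procedure in code (the feature extractor and the FMR computation below are hypothetical placeholders passed in as callables, not the actual implementation):

```python
import numpy as np

def random_rotation():
    # Draw a random 3x3 rotation matrix via QR decomposition.
    q, _ = np.linalg.qr(np.random.randn(3, 3))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1  # ensure a proper rotation (det = +1)
    return q

def evaluate_rotated(P, Q, gt_corr, extract_features, evaluate_fmr):
    # P, Q: (N, 3) and (M, 3) point clouds; gt_corr: index pairs (i, j)
    # established from the original P, Q and the GT transformation T.
    # 1. Rotate P and Q separately with independent random rotations.
    P_rot = P @ random_rotation().T
    Q_rot = Q @ random_rotation().T
    # 2. Extract descriptors from the rotated clouds (network forward pass).
    F_P = extract_features(P_rot)
    F_Q = extract_features(Q_rot)
    # 3. Evaluate against the ORIGINAL GT correspondences, which the
    #    rotations do not change.
    return evaluate_fmr(F_P, F_Q, gt_corr)
```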
Thanks for your reply again. Can I take it this way? P and Q are a pair of point clouds with a corresponding ground truth. First, P and Q are separately rotated by different random angles to give P' and Q'. Then the descriptors f_P' and f_Q' of P' and Q' are extracted. Finally, P, Q, and the ground truth are still used to evaluate f_P' and f_Q'. Is my understanding correct? Thank you sincerely again.
Yes, exactly.
Thank you very much! Best wishes.
I'd like to do some comparative experiments based on your work. I saw that your FMR on 3DMatch has a "rotated" result, and I want to use the same setting as your paper when producing the "rotated" results. How did you get this result? Is a random 4 x 4 matrix applied to the original point clouds, with the Ground Truth also transformed accordingly?
Looking forward to your reply, thank you.
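For context, my understanding of "transformed accordingly" is the following relation, sketched with hypothetical 4 x 4 homogeneous transforms S_P and S_Q and assuming the GT transform T maps P into Q's frame:

```python
import numpy as np

def update_gt_transform(T, S_P, S_Q):
    # If P' = S_P * P, Q' = S_Q * Q, and the GT satisfies Q = T * P, then
    # Q' = S_Q * T * P = (S_Q * T * S_P^{-1}) * P', so the updated GT is:
    return S_Q @ T @ np.linalg.inv(S_P)
```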