Closed · tatsy closed this issue 2 years ago
Oh, I'm sorry about the duplicate question above. This problem was already pointed out previously: https://github.com/Lilac-Lee/PointNetLK_Revisited/issues/3#issuecomment-989155553
I agree with the comment above. I'm afraid the 3DMatch experiment in the paper seems to be invalid.
Hi, thanks for your question. The voxelization setting does rely on the "canonical pose" of the source and the target. I have updated the code in data_utils.py so that voxelization is performed after the transformation, and I have also updated the README. Cheers.
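For illustration, here is a minimal sketch of the intended ordering. The helper names and parameters below are made up for this example and are not the actual functions in data_utils.py: the source is perturbed by the random rigid transform first, and only then are both clouds voxelized and their occupied voxels intersected.

```python
import numpy as np

def random_rigid_transform(rng):
    # Illustrative helper (not from the repository): draw a random rotation
    # via QR decomposition of a Gaussian matrix, plus a small translation.
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0
    t = 0.1 * rng.standard_normal(3)
    return q, t

def occupied_voxels(points, voxel_size):
    # Integer indices of voxels that contain at least one point.
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

rng = np.random.default_rng(0)
p0 = rng.random((2048, 3))      # target cloud
p1 = p0.copy()                  # source cloud, initially aligned

# Apply the random rigid transform to the source FIRST ...
R, t = random_rigid_transform(rng)
p1 = p1 @ R.T + t

# ... and only then compute the voxel overlap, so it no longer depends on
# the ground-truth (canonical) pose.
overlap = occupied_voxels(p0, 0.05) & occupied_voxels(p1, 0.05)
print(len(overlap), "overlapping voxels")
```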
Hi @Lilac-Lee,
Thank you very much for releasing your project code! I have a question about how the voxel overlaps are found in the testing phase.
As far as I read the code, the following lines are responsible for finding the voxel overlaps, but it seems that the input data, namely `p0` and `p1`, are already aligned here: https://github.com/Lilac-Lee/PointNetLK_Revisited/blob/c0a87ba6a33ca8744ad647381be1f8f5d4b520cb/data_utils.py#L111-L119

Also, it seems that the random rigid transformation is applied after the voxel overlap is found: https://github.com/Lilac-Lee/PointNetLK_Revisited/blob/c0a87ba6a33ca8744ad647381be1f8f5d4b520cb/data_utils.py#L188-L192
According to my understanding, this seems a bit odd, because we do not know the ground-truth pose in the testing phase and therefore cannot compute the voxel overlaps in this way.
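To make sure I am reading the flow correctly, here is a rough, self-contained sketch of the order of operations as I understand it. The helper and the transform below are placeholders of my own, not the repository's actual functions:

```python
import numpy as np

def occupied_voxels(points, voxel_size=0.05):
    # Placeholder helper: integer indices of voxels occupied by the cloud.
    return set(map(tuple, np.floor(points / voxel_size).astype(int)))

p0 = np.random.rand(1024, 3)    # target, canonical pose
p1 = p0.copy()                  # source, still aligned with the target here

# Step 1 (as I read data_utils.py L111-L119): the overlap is computed while
# p0 and p1 are still aligned, which implicitly uses the ground-truth pose.
overlap = occupied_voxels(p0) & occupied_voxels(p1)

# Step 2 (as I read data_utils.py L188-L192): the random rigid transform is
# applied only after the overlap has already been fixed.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
p1 = p1 @ R.T + np.array([0.1, 0.0, 0.0])
```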
Excuse me if there is something wrong in my understanding. I would appreciate it if you could explain how the voxel overlap is found in the testing phase.