The transformation matrix is an identity matrix because, in our case, we use subscans generated from the parent scans in 3RScan; hence they are already aligned, so their relative transformation should be the identity. Could you describe your process for reproducing our results in a bit more detail and point me to which result you are unable to reproduce?
The values might vary slightly depending on the subscan generation process, but they should be reflective of our results in the paper.
Hi, thank you for the reply. I asked about the transformation between the source and target subscans because I want to use SGAligner for point cloud registration between overlapping subscans.
I believe an additional random transformation probably needs to be applied between the already aligned source and target subscans to serve as the "ground truth".
However, I didn't find any implementation of such an added transformation in "inference_align_reg.py". If the identity matrix is taken as the transformation between source and target, the registration error I obtain is far lower than what is shown in the paper, since the two scans are already aligned, as you said.
That's why I am asking whether I missed something, or whether there will be a later update that adds random-transformation augmentation.
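For clarity, this is roughly what I mean by random-transformation augmentation. It is a minimal sketch of my own, assuming numpy and scipy are available; the function names and the 45° / 0.5 m bounds are illustrative choices, not code from the repository:

```python
# Minimal sketch (my own illustration, not the repository's code): sample a
# bounded random rigid transform, apply it to the source subscan, and keep the
# inverse of that perturbation as the registration ground truth.
# The 45 deg / 0.5 m bounds are arbitrary assumptions.
import numpy as np
from scipy.spatial.transform import Rotation


def random_rigid_transform(max_angle_deg=45.0, max_trans=0.5, rng=None):
    """Return a random 4x4 rigid transform with bounded rotation and translation."""
    rng = np.random.default_rng() if rng is None else rng
    axis = rng.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.deg2rad(rng.uniform(0.0, max_angle_deg))
    rot = Rotation.from_rotvec(angle * axis).as_matrix()
    trans = rng.uniform(-max_trans, max_trans, size=3)

    transform = np.eye(4)
    transform[:3, :3] = rot
    transform[:3, 3] = trans
    return transform


def augment_source(src_points, rng=None):
    """Perturb the source points (N x 3) with a random rigid transform.

    Returns the perturbed points and the ground-truth transform that maps the
    perturbed source back onto the target frame (the inverse perturbation).
    """
    perturbation = random_rigid_transform(rng=rng)
    src_h = np.hstack([src_points, np.ones((src_points.shape[0], 1))])
    src_aug = (perturbation @ src_h.T).T[:, :3]
    gt_transform = np.linalg.inv(perturbation)  # aligns augmented source to target
    return src_aug, gt_transform
```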
Thank you in advance.
Hello, I am a little confused about the ground-truth transformation between the source and reference point clouds. Running the inference code with the default identity matrix as gt_transform, I can't reproduce the results given in the paper. Will there be an update adding the code for the exact generation method and margin thresholds of the gt_transform in the future? Thank you very much.
https://github.com/sayands/sgaligner/blob/49ae3e1398e369557878af7c45252817a7abe72f/src/inference/sgaligner/inference_align_reg.py#L153C21-L153C62