Closed: syqzdy closed this issue 5 years ago
Hi @syqzdy, I'm not sure if this is what you're asking, but if you're asking how to interpret the groundtruths, [t1, t2, t3] are the groundtruth relative translations and [q1, q2, q3, q4] are the groundtruth relative rotations (quaternion representation).
A MATLAB example of how to interpret the transformation can be found in scripts/show_alignment.m.
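For reference, here is a minimal numpy sketch (not from the repo) of how a [t1, t2, t3] + [q1, q2, q3, q4] pair can be applied to a point cloud. It assumes a scalar-first unit quaternion [qw, qx, qy, qz]; the actual ordering used by the dataset should be checked against scripts/show_alignment.m.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion [qw, qx, qy, qz] to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def apply_transform(points, t, q):
    """Apply the relative transform (R(q), t) to an (N, 3) point cloud."""
    R = quat_to_rot(np.asarray(q, dtype=float))
    return points @ R.T + np.asarray(t, dtype=float)
```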
I know the meaning of [t1,t2,t3] and [q1,q2,q3,q4], but I don't know how to get them. If I have two point clouds, how can I compute [t1,t2,t3] and [q1,q2,q3,q4]? Using PCL? Thanks!
I computed the groundtruths using the ICP algorithm in MATLAB, with the GPS/INS pose as initialization.
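The procedure described above (point-to-point ICP seeded with an initial pose) can be sketched in numpy; this is an illustrative re-implementation, not the author's MATLAB code, and it uses brute-force nearest neighbours, which is only practical for small clouds.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, R0=np.eye(3), t0=np.zeros(3), iters=50):
    """Point-to-point ICP, initialized with (R0, t0), e.g. a GPS/INS pose."""
    R, t = R0, t0
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours in dst for each moved src point
        idx = np.argmin(((moved[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid_transform(src, dst[idx])
    return R, t
```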
If you're referring to our algorithm, first use inference.py to compute the feature keypoints and descriptors. You can then compute the relative transformation using a standard RANSAC pipeline (or you can use the script we provided, scripts/computeAndVisualizeMatches.m).
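A standard RANSAC pipeline over putative keypoint matches can be sketched as follows. This is a generic illustration, not scripts/computeAndVisualizeMatches.m: it assumes src_kp[i] has already been matched to dst_kp[i] by descriptor nearest-neighbour search, and it repeats the SVD-based rigid-fit helper so the snippet is self-contained.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(0), dst.mean(0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def ransac_rigid(src_kp, dst_kp, iters=500, thresh=0.5, rng=None):
    """Estimate a rigid transform from putative 3D keypoint matches,
    rejecting wrong matches by consensus."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(src_kp), dtype=bool)
    for _ in range(iters):
        sample = rng.choice(len(src_kp), 3, replace=False)
        R, t = best_rigid_transform(src_kp[sample], dst_kp[sample])
        resid = np.linalg.norm(src_kp @ R.T + t - dst_kp, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit on all inliers of the best minimal model
    return best_rigid_transform(src_kp[best_inliers], dst_kp[best_inliers])
```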
Thank you! Can you release the code that uses the ICP algorithm to compute the groundtruths?
I used the pcregrigid() function in MATLAB, which takes the two point clouds as input.
Thank you so much!
Did you get the groundtruth transform on your own data? How did you do it? Thanks
@ltj95 The groundtruth transformation for the training set is computed from the GPS/INS that comes with the Oxford RobotCar dataset. Note that GPS/INS has a certain amount of inaccuracy, so you will not get perfectly accurate correspondences. For the evaluation test set, we run a step of ICP to refine the transformation and manually verify that it is correct.
I downloaded the test_models, which include a groundtruth.txt. Each pair of point clouds has t1, t2, t3, q1, q2, q3, q4; how can I obtain these values? Thanks!