Young-woong-Cho closed this issue 3 years ago
Never mind, I missed the part stating that the rotation matrix and the translation vector are relative to the initial frame X. I computed the relative pose and everything works now. I'll leave this thread up so that others don't repeat the same mistake I made. Thank you again!
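For anyone who lands here later, a minimal sketch of the relative-pose computation I mean, assuming each frame provides a 4x4 vehicle-to-world pose matrix; the names `T_0` and `T_i` are only illustrative, not from this repo:

```python
import numpy as np

def relative_pose(T_0: np.ndarray, T_i: np.ndarray) -> np.ndarray:
    """Pose of frame i expressed relative to the initial frame 0.

    T_0 and T_i are assumed to be 4x4 homogeneous vehicle-to-world
    transforms, i.e. [[R, t], [0, 1]]. The relative pose is
    inv(T_0) @ T_i, whose rotation stays close to the identity for
    modest ego motion, which is what the KITTI ground truth looks like.
    """
    return np.linalg.inv(T_0) @ T_i
```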
Hi, thank you for the great work.
As you mention in the paper, the coordinate systems for KITTI and Waymo are different. To account for the difference, you state that you "transform[ed] these points into a coordinate system centered at the location of the LiDAR sensor in the KITTI setup". I wonder whether you also applied a transformation to the ego-motion data of the Waymo Open Dataset (i.e., R and t), since the ego vehicle's forward direction is along the z-axis for KITTI but along the x-axis for Waymo. Also, the ego transformation does not necessarily have to be "aligned" with the absolute coordinate frame.

If I blindly apply the same transformation that I used for the point clouds to the ego poses, the absolute R is not aligned with the world coordinate frame; in other words, R is quite different from the identity matrix I. However, both the estimated R and the ground-truth R for KITTI are almost aligned with the world coordinate frame, i.e., very close to identity. Did you apply any other trick to bring R back to I during training? Could you explain a bit more how you translated the ego motion from Waymo to KITTI? Thank you in advance.
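To make the question concrete, here is roughly what I tried, assuming Waymo's vehicle frame is x-forward / y-left / z-up and the KITTI-style target frame is x-right / y-down / z-forward; the matrix `C` and the function name are my own illustration, not code from this repo:

```python
import numpy as np

# Assumed change of basis from Waymo's vehicle frame (x fwd, y left, z up)
# to a KITTI-camera-style frame (x right, y down, z fwd). A translation to
# the KITTI LiDAR position, as described in the paper, could be folded in
# as well.
C = np.array([[0., -1., 0.],
              [0., 0., -1.],
              [1., 0., 0.]])

def remap_ego_pose(T_waymo: np.ndarray) -> np.ndarray:
    """Re-express a 4x4 ego pose after remapping point coordinates by C.

    If points are remapped as p_new = C @ p_old, a sensor-to-world pose T
    has to be conjugated: T_new = M @ T @ inv(M), with M the homogeneous
    version of C.
    """
    M = np.eye(4)
    M[:3, :3] = C
    return M @ T_waymo @ np.linalg.inv(M)
```

As noted in the resolution above, this conjugation alone does not bring R near identity, because Waymo poses are given in a global map frame; only the pose relative to the sequence's initial frame ends up close to I.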