Closed — Xiaxia1997 closed this issue 1 year ago
The points are given in the lidar coordinate system; the poses are given in the coordinate system of the camera. Thus, one first needs to go from lidar to camera coordinates, apply the camera poses, and then go back to lidar coordinates to have the points in the lidar coordinate system again.
Hope that helps.
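The lidar → camera → pose → lidar chain described above can be sketched with NumPy. The matrices below are made up purely for illustration; in the actual dataset, Tr would come from the calibration file and pose from poses.txt, both as 4x4 homogeneous transforms:

```python
import numpy as np

# Hypothetical 4x4 homogeneous transforms (illustrative values only):
# Tr   : extrinsic calibration, lidar -> camera
# pose : camera pose for the current scan, relative to the first scan
Tr = np.eye(4)
Tr[:3, 3] = [0.27, -0.08, -0.06]  # made-up lidar->camera offset
pose = np.eye(4)
pose[:3, 3] = [10.0, 0.0, 0.5]    # made-up camera motion

Tr_inv = np.linalg.inv(Tr)

# Chain: lidar -> camera (Tr), apply camera pose, camera -> lidar (Tr_inv)
pose_lidar = Tr_inv @ pose @ Tr

# Apply the combined transform to a point in homogeneous lidar coordinates
p_lidar = np.array([1.0, 2.0, 0.0, 1.0])
p_transformed = pose_lidar @ p_lidar
```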
@jbehley So the camera poses mean the current camera position with respect to the initial camera position (from here). In this way, we get the points from the current lidar coordinate system into the initial lidar coordinate system. Did I get that right? Since from here the author explains the pose as the transformation from the lidar to the world coordinate system, I'm confused about it.
If you have further issues with the transformation or need some additional explanation, let me know (and re-open the issue). However, I'll close the issue for now.
From https://github.com/PRBonn/semantic-kitti-api/issues/78, we learn that poses.txt is given in the camera coordinate system and Tr is the extrinsic calibration matrix from velodyne to camera. In that case, I would expect

Pose_velodyne = Tr_inv * Pose_camera

But the code is

poses.append(np.matmul(Tr_inv, np.matmul(pose, Tr)))

which means

Pose_velodyne = Tr_inv * Pose_camera * Tr

I'm confused about this.
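For what it's worth, the two formulas can be compared numerically. The conjugated form Tr_inv @ pose @ Tr matches the step-by-step chain from the earlier answer (go to camera coordinates, apply the pose, come back), while Tr_inv @ pose applies the camera pose to a point that is still in lidar coordinates. The matrices below are random stand-ins, not real calibration data:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_se3(rng):
    """Random 4x4 rigid-body transform for testing (made-up data)."""
    # Random rotation via QR decomposition of a Gaussian matrix
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1  # ensure a proper rotation (det = +1)
    T = np.eye(4)
    T[:3, :3] = Q
    T[:3, 3] = rng.normal(size=3)
    return T

Tr = random_se3(rng)    # stand-in for the lidar -> camera calibration
pose = random_se3(rng)  # stand-in for a camera pose from poses.txt
Tr_inv = np.linalg.inv(Tr)

p_lidar = np.array([1.0, 2.0, 3.0, 1.0])  # homogeneous lidar point

# Route 1: go to the camera frame, apply the pose, come back to lidar
step_by_step = Tr_inv @ (pose @ (Tr @ p_lidar))

# Route 2: the combined matrix used in the code
combined = (Tr_inv @ pose @ Tr) @ p_lidar

# Both routes agree
assert np.allclose(step_by_step, combined)

# Tr_inv @ pose alone is generally NOT equivalent: the point was never
# taken into the camera frame before the pose was applied.
wrong = (Tr_inv @ pose) @ p_lidar
```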