Open smaghsoudi opened 5 years ago
My guess is that since the point cloud has its origin at the left imager of the RealSense, I only need the intrinsics of the left imager to estimate the extrinsic parameters between the LiDAR and that imager. Does this make sense?
Ideally, to get the transform between a camera and a LiDAR you only require the intrinsics of that camera.
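To see why only one camera's intrinsics matter: the extrinsics map LiDAR points into that camera's frame, and that camera's intrinsic matrix then maps them to pixels. Here is a minimal sketch of that projection in NumPy, with placeholder values for the intrinsic matrix `K` and identity extrinsics `R`, `t` (a real calibration would estimate `R` and `t`, e.g. from 2D-3D correspondences):

```python
import numpy as np

# Hypothetical intrinsics of the left imager (fx, fy, cx, cy are placeholder values).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics: rotation R and translation t mapping the LiDAR frame into the
# camera frame. Identity here purely for illustration; calibration estimates these.
R = np.eye(3)
t = np.zeros(3)

def project_lidar_points(points_lidar, K, R, t):
    """Project Nx3 LiDAR points into pixel coordinates of the camera
    whose intrinsic matrix is K."""
    cam = points_lidar @ R.T + t       # LiDAR frame -> camera frame (extrinsics)
    uv = cam @ K.T                     # camera frame -> image plane (intrinsics)
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

pts = np.array([[0.0, 0.0, 2.0],   # straight ahead -> projects to principal point
                [1.0, 0.0, 2.0]])  # 1 m to the right at 2 m depth
pixels = project_lidar_points(pts, K, R, t)
print(pixels)  # [[320. 240.], [620. 240.]]
```

Note that no other camera's intrinsics appear anywhere in the pipeline, which is why calibrating against the left imager needs only the left imager's `K`.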
I have a Velodyne and a RealSense RGB-D camera and was wondering which set of intrinsic parameters I should use. Do I need to perform this calibration for the RGB lens and each of the two depth imagers?