carnegierobotics / multisense_ros

ROS Wrapper for LibMultiSense

Clarity regarding the computation of Stereo point cloud #96

Open hangzu-tech opened 4 months ago

hangzu-tech commented 4 months ago

I would like some clarification on the following:

I am subscribing to the topic /multisense/image_points2, which is described in the documentation as "Grayscale stereo point cloud. Each point contains 4 fields (x, y, z, luminance)."

  1. So is this stereo point cloud aligned to the left image?
  2. Is the intensity (luminance) value for each point taken directly from the corresponding pixel in the left image?
  3. Can this be changed to align to the right image?

Thanks.
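
For reference, a minimal ROS 1 Python subscriber sketch for this topic; the field names (x, y, z, luminance) are assumed from the documented description and may differ from what the driver actually publishes:

```python
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def callback(cloud):
    # Iterate over the four per-point fields; field names assumed from the docs.
    for x, y, z, luminance in point_cloud2.read_points(
            cloud, field_names=("x", "y", "z", "luminance"), skip_nans=True):
        pass  # process each point here

rospy.init_node("image_points2_listener")
rospy.Subscriber("/multisense/image_points2", PointCloud2, callback)
rospy.spin()
```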

mattalvarado commented 4 months ago

Thanks for reaching out, @hangzu-tech.

  1. The stereo point cloud published by the ROS driver is the result of a direct conversion of the MultiSense disparity image. The disparity image is computed using the left rectified image as the source, so the resulting point cloud is aligned to the left rectified image. Details on that conversion can be found here: https://docs.carnegierobotics.com/docs/cookbook/overview.html#reproject-disparity-images-to-3d-point-clouds (a short NumPy sketch of that math follows after this list).
  2. The luminance for each point is taken directly from the left rectified image pixel corresponding to the point's disparity pixel.
  3. There is currently no way to have MultiSense use the right image as the disparity computation source. Could you provide more detail on your application, and why you are looking for disparity computed from the right rectified image?
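
Here is that sketch: a minimal version of the reprojection, where fx, cx, cy, and baseline are placeholders that would come from the camera's rectified CameraInfo, and square pixels (fx ≈ fy) are assumed after rectification:

```python
import numpy as np

def disparity_to_points(disparity, left_rect, fx, cx, cy, baseline):
    """Reproject a rectified disparity image (in pixels) into 3D points in the
    left rectified camera frame, attaching luminance from the left image."""
    v, u = np.indices(disparity.shape)       # pixel row/column grids
    valid = disparity > 0                    # zero disparity means no stereo match

    z = (fx * baseline) / disparity[valid]   # depth along the optical axis
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fx             # assumes fx ~= fy after rectification

    # Luminance sampled at the same left rectified pixel as the disparity value
    luminance = left_rect[valid].astype(np.float32)
    return np.column_stack((x, y, z, luminance))
```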