I am subscribing to the topic /multisense/image_points2, described in the documentation as "Grayscale stereo point cloud. Each point contains 4 fields (x, y, z, luminance)."
So is this stereo point cloud aligned to the left image?
Is the intensity (luminance) value for each point taken directly from the corresponding pixel in the left image?
Yes. The luminance for each point is taken directly from the pixel in the left rectified image corresponding to that point's disparity pixel.
There is currently no way to have MultiSense use the right image as the disparity computation source. Could you provide more detail on your application, and why you are looking for disparity computed from the right rectified image?
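As a concrete illustration of the layout described above, here is a minimal sketch of unpacking (x, y, z, luminance) tuples from a PointCloud2-style packed byte buffer. The little-endian float32 layout and 16-byte point step are assumptions; always confirm the actual field names, offsets, and `point_step` from the message's `fields` array (in ROS 1, `sensor_msgs.point_cloud2.read_points` does this for you).

```python
import struct

# Assumed layout: four consecutive little-endian float32 fields per point
# (x, y, z, luminance), so 16 bytes per point. Verify against the actual
# PointCloud2 message's `fields` and `point_step` before relying on this.
POINT_STEP = 16

def unpack_points(data: bytes, point_step: int = POINT_STEP):
    """Yield (x, y, z, luminance) tuples from a packed point buffer."""
    for offset in range(0, len(data), point_step):
        yield struct.unpack_from('<ffff', data, offset)

# Usage: two synthetic points packed the same way
buf = struct.pack('<ffff', 1.0, 2.0, 0.5, 128.0) + \
      struct.pack('<ffff', -0.25, 0.0, 3.0, 64.0)
points = list(unpack_points(buf))
# points[0] -> (1.0, 2.0, 0.5, 128.0)
```

The luminance field here would carry the left rectified image's grayscale value for each point, per the explanation above.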
To clarify: I am subscribing to the topic
/multisense/image_points2
which is described in the documentation as "Grayscale stereo point cloud. Each point contains 4 fields (x, y, z, luminance)." Thanks.