Open poornimajd opened 1 day ago
Hi @poornimajd. The /multisense/image_points2_color topic is a version of the point cloud that uses this routine (https://docs.carnegierobotics.com/docs/cookbook/overview.html#create-a-color-3d-point-cloud) to colorize each 3D point with the aux image. This is not quite what you want, since I assume you are looking for the depth of objects you detected in the aux camera. A possible solution is to exploit the approximation outlined here (https://docs.carnegierobotics.com/docs/cookbook/overview.html#approximation-for-execution-speed) and apply an extrinsics shift to the depth image in the left rectified coordinate frame to transform it into the aux rectified coordinate frame. Once you have a depth image in the aux camera coordinate frame, you can perform direct point/depth lookups for any of your detections. You can use the Tx value computed from the aux projection matrix (https://docs.carnegierobotics.com/docs/calibration/stereo.html#p-matrix) for this extrinsics transformation.
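To make that concrete, here is a rough NumPy sketch of the extrinsics shift (the function name is mine, not part of the MultiSense API, and it uses the cookbook's simplifying assumption that the aux and left rectified cameras share intrinsics and differ only by a horizontal baseline):

```python
import numpy as np

def shift_depth_to_aux(depth, fx, tx):
    """Reproject a left-rectified depth image into the aux rectified frame.

    Sketch only: assumes the aux and left rectified cameras share intrinsics
    and differ by a horizontal baseline tx (meters), per the speed
    approximation in the CRL cookbook. Occlusion handling is omitted
    (multiple source pixels may land on the same aux pixel).

    depth: (H, W) float array of Z values in meters (0 or NaN = invalid)
    fx:    focal length in pixels, from the aux P matrix (P[0, 0])
    tx:    baseline in meters, e.g. tx = -P[0, 3] / P[0, 0] from the aux P matrix
    """
    h, w = depth.shape
    aux_depth = np.zeros_like(depth)
    valid = np.isfinite(depth) & (depth > 0)
    v, u = np.nonzero(valid)
    z = depth[v, u]
    # Horizontal pixel shift induced by the baseline: du = fx * tx / Z
    u_aux = np.round(u + fx * tx / z).astype(int)
    keep = (u_aux >= 0) & (u_aux < w)
    aux_depth[v[keep], u_aux[keep]] = z[keep]
    return aux_depth
```

The key point is that the shift is depth-dependent (du = fx·tx/Z), so nearby points move farther than distant ones; a constant pixel offset would not align the images correctly.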
You are correct that the main purpose of the aux camera is to provide color images for various ML detection models. The aux camera also has a wider FOV lens which can help for certain detection applications.
Thank you for the detailed reply. Yes I need the depth of the objects detected in the aux image.
So if I understand correctly, is the following pipeline right? Could you please confirm it for me?
1) Subscribe to "/multisense/image_points2" and to the left rectified image.
2) Create the auxiliary image using the formula in (https://docs.carnegierobotics.com/docs/cookbook/overview.html#approximation-for-execution-speed).
3) The different terms in the formula can be obtained as:
So once the terms are obtained according to 3), I can use the auxiliary image created in 2) as an input to the model, and the corresponding point cloud will be "/multisense/image_points2".
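For the final lookup step, I'm imagining something like this hypothetical helper (assuming the aux-aligned depth image is a NumPy array in meters and the detection box is in aux pixel coordinates):

```python
import numpy as np

def bbox_depth(aux_depth, bbox):
    """Median depth (meters) inside a detection box on an aux-aligned
    depth image.

    Sketch only, not part of the MultiSense API.
    aux_depth: (H, W) float array, 0 or NaN = invalid
    bbox:      (u_min, v_min, u_max, v_max) in pixels
    Returns None if the box contains no valid depth.
    """
    u0, v0, u1, v1 = bbox
    patch = aux_depth[v0:v1, u0:u1]
    vals = patch[np.isfinite(patch) & (patch > 0)]
    # Median is more robust than the mean against pixels that straddle
    # the object boundary and hit the background.
    return float(np.median(vals)) if vals.size else None
```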
I also wanted to verify: the rectified images on all topics are undistorted, right?
Hello, I'm using a color image as input for my model, so I've subscribed to "/multisense/aux/image_rect_color". I also need the corresponding point cloud. Should I use "/multisense/image_points2_color" for this purpose? Just as "/multisense/image_points2" (the grayscale point cloud) is aligned with the left rectified image, I assume "/multisense/image_points2_color" should be aligned with "/multisense/aux/image_rect_color", correct?
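For context, this is roughly how I look up the point for an image pixel today, relying on the cloud being organized (a sketch only; it assumes float32 x/y/z packed at the start of each point record, whereas in practice the offsets should be read from the PointCloud2 message's `fields` array):

```python
import struct

def lookup_point(cloud_data, width, point_step, u, v, xyz_offset=0):
    """Read (x, y, z) for pixel (u, v) from an organized point-cloud
    byte buffer laid out like sensor_msgs/PointCloud2.

    Sketch with assumed layout: little-endian float32 x, y, z located
    at xyz_offset bytes into each point record of size point_step.
    Valid only when the cloud is organized (height > 1) and row-major,
    so that point index = v * width + u.
    """
    base = (v * width + u) * point_step + xyz_offset
    return struct.unpack_from('<fff', cloud_data, base)
```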
Additionally, there isn't much information available about the auxiliary camera. Is its sole purpose to provide color images, or does it offer any other advantages?