Closed — dejongyeong closed this issue 3 years ago
Hello Dejong, that would be the right way to go, but unfortunately it would be a minor breaking change, because right now people receive the disparity image as a sensor_msgs/Image. Otherwise they would have to extract the sensor_msgs/Image from the stereo_msgs/DisparityImage through an extra indirection. I could implement it as a breaking change (version bump to 2.x.x) if nobody has anything against it. Note, however, that the meta information in the stereo_msgs/DisparityImage is only needed for computing the point cloud, which you can also receive directly within the same action.
@yguenduez thanks for the information. I would like to know what conversion is involved, as I am quite new to ROS. Thanks.
Do you mean the conversion from disparities to Z-coordinates (or 3D points), or the conversion between the two message types? If you meant the message types (ROS-related), you will have to put the sensor_msgs/Image disparity map into the stereo_msgs/DisparityImage by assigning its image member. For example, in C++:
...
// Get the disparity map via the current action; it arrives as a sensor_msgs/Image
sensor_msgs::Image disparityMapFromCamera = ...;

stereo_msgs::DisparityImage disparityImage;
disparityImage.header = disparityMapFromCamera.header; // keep frame id and timestamp
disparityImage.image = disparityMapFromCamera;
...
But currently the meta information is missing, such as the min and max disparity or the baseline/focal length involved, which are defined in the stereo_msgs/DisparityImage.
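For reference, the meta fields in question come from the stereo_msgs/DisparityImage message definition (reproduced from memory here; check the stereo_msgs package documentation for the authoritative version):

```
std_msgs/Header header          # frame id and timestamp
sensor_msgs/Image image         # the disparity map itself (32-bit float)
float32 f                       # focal length in pixels
float32 T                       # baseline in meters
sensor_msgs/RegionOfInterest valid_window
float32 min_disparity           # minimum valid disparity
float32 max_disparity           # maximum valid disparity
float32 delta_d                 # smallest increment in disparity
```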
@yguenduez apologies for the ambiguous question. For clarification, I meant the conversion between the two message types.
Understood. We need to fill in the meta information ourselves based on the camera model, right? But how could I know or calculate the min and max disparity? Could it be found in the camera's documentation?
Thanks.
@dejongyeong No worries. Yes, that is true. For example, the baseline of the camera can be found here. The min and max disparities are defined in the parameter settings of the stereo camera. They can be computed from here: we supply the minimum disparity and the number of disparities, so the max disparity is simply
max_disparity = num_disparities + min_disparity
Generally, all camera-related parameters and settings can be found in this tree. Usually you set up the camera in NxView, save the configuration as a JSON file, and load this JSON file within your node via the settings parameter. This step is explained here. Furthermore, the JSON file has the same structure as the tree in the manual linked above.
@yguenduez much appreciated for the insightful information and the quick response. I will look into the linked resources and work through them. I will close the issue now and re-open it if I encounter anything that needs clarification.
Thanks for the information once again.
Hi, I would like to inquire: would it be possible, by any chance, to convert the disparity_map (sensor_msgs/Image) to a stereo_msgs/DisparityImage? Thanks, and looking forward to hearing back from the community :D