ensenso / ros_driver

Official ROS driver for Ensenso stereo cameras.
http://wiki.ros.org/ensenso_driver
BSD 3-Clause "New" or "Revised" License

Question: conversion of disparity_map (sensor_msgs/Image) to stereo_msgs/DisparityImage ROS Message #48

Closed: dejongyeong closed this issue 3 years ago

dejongyeong commented 3 years ago

Hi, I would like to ask: would it be possible to convert the disparity_map (sensor_msgs/Image) to stereo_msgs/DisparityImage?

Thanks, and looking forward to hearing back from the community :D

yguenduez commented 3 years ago

Hello Dejong, that would be the right way to go, but unfortunately it would be a minor breaking change, because people currently receive the disparity image as a sensor_msgs/Image. After the change, they would have to extract the sensor_msgs/Image out of the stereo_msgs/DisparityImage with an extra indirection. I could implement it as a breaking change (version bump to 2.x.x) if nobody has anything against it. Note, however, that the meta information carried by stereo_msgs/DisparityImage is only needed for computing the point cloud, which you can also receive directly within the same action.

dejongyeong commented 3 years ago

@yguenduez thanks for the information. As I am quite new to ROS, I would like to know what the conversion involves. Thanks.

yguenduez commented 3 years ago

Do you mean the conversion from disparities to Z coordinates (3D points), or the conversion between the two message types? If you mean the message types (ROS-related), you will have to fill the sensor_msgs/Image disparity map into the stereo_msgs/DisparityImage by assigning its image member. For example, in C++:

```cpp
// ...
// Get the disparity map via the current action; it arrives as a sensor_msgs/Image.
sensor_msgs::Image disparityMapFromCamera = ...;
stereo_msgs::DisparityImage disparityImage;
disparityImage.image = disparityMapFromCamera;
// ...
```

But this alone leaves the meta information missing, such as the min and max disparity or the baseline/focal length, which are part of stereo_msgs/DisparityImage.

dejongyeong commented 3 years ago

@yguenduez apologies for the ambiguous question. For clarification, I meant the conversion between the two message types.

Understood. We need to fill in the meta info ourselves based on the camera model, right? But how could I know or calculate the min and max disparity? Could it be found in the camera's documentation?

Thanks.

yguenduez commented 3 years ago

@dejongyeong No worries. Yes, that is true. For example, the baseline of the camera can be found here. The min and max disparities are defined in the parameter settings of the stereo camera. They can be computed from here. We supply the minimum disparity and the number of disparities, so the maximum disparity is simply

max_disparity = min_disparity + num_disparities

Generally, all camera-related parameters and settings can be found in this tree. Usually you set up the camera in NxView, save a configuration as a JSON file, and load this JSON file within your node via the settings parameter. This step is explained here.

Furthermore, the JSON file has the same structure as the tree in the manual linked above.

dejongyeong commented 3 years ago

@yguenduez much appreciated for the insightful information and the quick response. I will look into the linked resources. I will close the issue now and re-open it if I encounter anything that needs clarification.

Thanks for the information once again.