stereolabs / zed-ros-wrapper

ROS wrapper for the ZED SDK
https://www.stereolabs.com/docs/ros/
MIT License

Depth Accuracy for Point Clouds #452

Closed astha736 closed 4 years ago

astha736 commented 5 years ago

Hi,

I am using a ZED camera to get point clouds of objects after segmentation, but the point cloud I receive is very distorted in depth: most of the object's points have a depth value much greater than the actual one. Please find pictures below.

For my project I need to extract the point cloud of detected objects for further processing, so a good point cloud is crucial, and if possible it should come from a single viewpoint. Is there any way I can reduce the depth distortion?

Thank you.

Regards, Astha

System information: Ubuntu 18.04, Intel® Core™ i7-6700HQ CPU @ 2.60GHz × 8, GeForce GTX 1070/PCIe/SSE2

Test: point cloud extracted for an object (cup) using a mask.

a) The horizontal axis is the depth. The image shows the distribution of the extracted points corresponding to the object. screenshot-1563793276

b) Another image of the same point cloud taken from a different angle for better visualization. screenshot-1563793343
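For context, here is a minimal sketch of how such a masked extraction might look, assuming the wrapper publishes an organized point cloud registered to the left image at HD720 resolution; the topic name and the `mask` array are illustrative placeholders, not part of the original report:

```python
# Sketch: extract the points of a segmented object from the ZED point cloud.
# Assumes an *organized* cloud (height x width, registered to the left image)
# and a boolean segmentation mask of the same size. Topic name is an assumption.
import numpy as np
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

mask = np.zeros((720, 1280), dtype=bool)  # placeholder; the real mask comes from segmentation

def cloud_cb(msg):
    pts = np.array(list(pc2.read_points(msg, field_names=("x", "y", "z"),
                                        skip_nans=False)))
    pts = pts.reshape(msg.height, msg.width, 3)    # organized cloud -> image grid
    obj = pts[mask]                                # keep only pixels inside the object mask
    obj = obj[np.isfinite(obj).all(axis=1)]        # drop invalid (NaN/inf) depth points

rospy.init_node("object_cloud")
rospy.Subscriber("/zed/point_cloud/cloud_registered", PointCloud2, cloud_cb)
rospy.spin()
```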

Myzhar commented 5 years ago

Hi @astha736 Can you please add information about the configuration that you are using for the ROS wrapper?

astha736 commented 5 years ago

Hi,

By that I assume you mean the configuration in the params/common.yaml file. Please find the parameters below; let me know if you meant something else.

The change I have made in this file is setting `frame_rate` to 15 instead of 30.

```yaml
auto_exposure:              true                                # Dynamic
exposure:                   100                                 # Dynamic
gain:                       100                                 # Dynamic
confidence:                 100                                 # Dynamic
mat_resize_factor:          1.0                                 # Dynamic
point_cloud_freq:           10.0                                # Dynamic - frequency of the point cloud publishing (equal to or less than the `frame_rate` value)

general:
    camera_flip:                false
    zed_id:                     -1
    serial_number:              0
    resolution:                 2                                   # '0': HD2K, '1': HD1080, '2': HD720, '3': VGA
    frame_rate:                 15                                  # default was 30
    gpu_id:                     -1
    base_frame:                 'base_link'                         # must be equal to the frame_id used in the URDF file
    camera_frame:               'zed_camera_center'                 # must be equal to the frame_id used in the URDF file
    left_camera_frame:          'zed_left_camera_frame'             # must be equal to the frame_id used in the URDF file
    left_camera_optical_frame:  'zed_left_camera_optical_frame'     # must be equal to the frame_id used in the URDF file
    right_camera_frame:         'zed_right_camera_frame'            # must be equal to the frame_id used in the URDF file
    right_camera_optical_frame: 'zed_right_camera_optical_frame'    # must be equal to the frame_id used in the URDF file
    verbose:                    true
    svo_compression:            4                                   # `0`: RAW (no compression), `1`: LOSSLESS (PNG/ZSTD), `2`: LOSSY (JPEG), `3`: AVCHD (H264, SDK v2.7), `4`: HEVC (H265, SDK v2.7)
    self_calib:                 true                                # enable/disable self-calibration at start-up

video:
    rgb_topic_root:             'rgb'                               # default `rgb/image_rect_color`, `rgb/camera_info`, `rgb_raw/image_raw_color`, `rgb_raw/camera_info`
    left_topic_root:            'left'                              # default `left/image_rect_color`, `left/camera_info`, `left_raw/image_raw_color`, `left_raw/camera_info`
    right_topic_root:           'right'                             # default `right/image_rect_color`, `right/camera_info`, `right_raw/image_raw_color`, `right_raw/camera_info`
    stereo_topic_root:          'stereo'                            # default `stereo/image_rect_color`, `stereo/camera_info`, `stereo_raw/image_raw_color`, `stereo_raw/camera_info`
    color_enhancement:          true                                # [FUTURE USE] Enhances color spreading on the R/G/B channels and increases gamma correction in black areas for a better gray segmentation. Recommended for computer vision applications.

depth:
    quality:                    1                                   # '0': NONE, '1': PERFORMANCE, '2': MEDIUM, '3': QUALITY, '4': ULTRA
    sensing_mode:               0                                   # '0': STANDARD, '1': FILL
    depth_stabilization:        1                                   # `0`: disabled, `1`: enabled
    openni_depth_mode:          0                                   # '0': 32bit float meters, '1': 16bit uchar millimeters
    depth_topic_root:           'depth'                             # default `depth/depth_registered` or `depth/depth_raw_registered` if `openni_depth_mode` is true
    point_cloud_topic_root:     'point_cloud'
    disparity_topic:            'disparity/disparity_image'
    confidence_root:            'confidence'                        # default `confidence/confidence_image` and `confidence/confidence_map`

tracking:
    publish_tf:                 true                                # publish `odom -> base_link` TF
    publish_map_tf:             true                                # publish `map -> odom` TF
    world_frame:                'map'                               # the reference fixed frame (same as `map_frame` or `odometry_frame`)
    map_frame:                  'map'
    odometry_frame:             'odom'
    odometry_db:                ''
    spatial_memory:             true                                # enable to detect loop closures
    floor_alignment:            false                               # enable to automatically calculate the camera/floor offset
    initial_base_pose:          [0.0,0.0,0.0, 0.0,0.0,0.0]          # [X, Y, Z, R, P, Y]
    pose_topic:                 'pose'
    publish_pose_covariance:    true                                # enable to publish the `pose_with_covariance` message
    fixed_covariance:           false                               # set the covariance for pose and odometry to a diagonal matrix with `fixed_cov_value` on the diagonal
    fixed_cov_value:            1e-6                                # value used on the diagonal of the fixed covariance matrix (`fixed_covariance -> true`)
    odometry_topic:             'odom'
    init_odom_with_first_valid_pose: true                           # enable to initialize the odometry with the first valid pose
    path_pub_rate:              2.0                                 # path positions publishing frequency
    path_max_count:             -1                                  # use '-1' for unlimited path size
    two_d_mode:                 false                               # force navigation on a plane. If true, the Z value will be fixed to `fixed_z_value`, roll and pitch to zero
    fixed_z_value:              1.0                                 # value to be used for the Z coordinate if `two_d_mode` is true

mapping:
    mapping_enabled:            false                               # true to enable mapping and fused point cloud publication
    resolution:                 1                                   # `0`: HIGH, `1`: MEDIUM, `2`: LOW
    fused_pointcloud_freq:      1.0                                 # frequency of the publishing of the fused colored point cloud
```

Myzhar commented 5 years ago

The configuration is correct. Can you post a screenshot of the RGB image? Just to have an idea of the kind of scene you are acquiring.

astha736 commented 5 years ago

cup_segmentation

Sorry, I don't have the one corresponding to the above point cloud, but the above image is almost the same, with the cup at the same position and distance from the camera.

Myzhar commented 5 years ago

The cup is really close to the camera. What is its distance?

Myzhar commented 5 years ago

To improve the quality of the depth map you can set the quality parameter in common.yaml to 4 (ULTRA mode). Please remember that the minimum distance the ZED camera can calculate is 30 cm (the min_depth parameter in zed.yaml).
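For reference, assuming the rest of params/common.yaml stays as posted above, the suggested edit would look like this:

```yaml
depth:
    quality:                    4                                   # '4': ULTRA
```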

astha736 commented 5 years ago

The cup is ~50 cm away from the camera. I have also tested the configuration quality=4 in the common.yaml file; please find the results attached below. The quality has improved (fewer points with distorted depth).

screenshot-1563806334

screenshot-1563806351

I have a few more questions:

1. Is there any other fix I could try?
2. What is the role of sensing_mode?
3. Could you recommend some other (maybe algorithmic) solution?

Myzhar commented 5 years ago
  1. You could "play" with the confidence threshold at runtime using Dynamic Reconfigure (see the sketch after this list): https://www.stereolabs.com/docs/ros/#dynamic-reconfigure
  2. sensing_mode is made mainly for Virtual and Augmented Reality, to fill the holes in the depth map: https://www.stereolabs.com/docs/api/group__Depth__group.html#ga391147e2eab8e101a7ff3a06cbed22da
  3. One of the problems with the cup you are using is that it is white and mostly homogeneous, so information is lost because the stereo matching algorithm cannot find unique matches between portions of the left and right images. Do you have similar problems with different objects?
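As a hedged sketch of point 1, the confidence threshold can be lowered at runtime through the standard Dynamic Reconfigure client; the node name here is an assumption that depends on your launch file and wrapper version (rqt_reconfigure shows the real node and parameter names):

```python
# Sketch: lower the depth confidence threshold at runtime via Dynamic Reconfigure.
# The node name '/zed/zed_node' is an assumption for this wrapper version.
import rospy
import dynamic_reconfigure.client

rospy.init_node("confidence_tuner")
client = dynamic_reconfigure.client.Client("/zed/zed_node", timeout=5)
# 100 keeps every depth point; lower values progressively cut away the
# points with lower stereo-matching confidence
client.update_configuration({"confidence": 80})
```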
astha736 commented 5 years ago

Yes, I am facing the same issue (at times worse) with other objects as well. For example, below are pictures of a fairly opaque purple bottle. The bottle is also ~50 cm away, and these are images of the point clouds at different time instances (every ~2 seconds).

A very small proportion of the points are at the actual depth; the small purple cloud on the left side of the image is the actual position of the bottle.

screenshot-1563808079 screenshot-1563808086 screenshot-1563808088

Myzhar commented 5 years ago

It's not easy to understand what the point clouds are showing without a 2D RGB image.

astha736 commented 5 years ago

Yes, I apologize. Please find attached the RGB image and screenshots of the point clouds below.

1.1) For quality = 1 (the pictures below correspond to a mild case; the pictures in the previous comment were of a case with strong distortion, with the same bottle at almost the same distance and viewpoint. I am sorry, I don't have an RGB image corresponding to it.) screenshot_bottle_q1
1.2) The small purple cluster is the actual position of the object. bottle_q1_1
1.3) Screenshot after rotating the point cloud from left to right (looking from the left side). The small purple cluster (left) above corresponds to the inner point cloud, and the larger cluster above corresponds to the outer point cloud (cap and boundary points). bottle_q1_2
1.4) bottle_q1_3

2.1) For quality = 4. screenshot_bottle_q4
2.2) The left cluster is the main body and the right cluster is the boundary. bottle_q4_1
2.3) Screenshot after rotating the point cloud from left to right (looking from the left side). bottle_q4_2
2.4) bottle_q4_3

Myzhar commented 5 years ago

The point cloud seems quite good. Can you explain the problem to me in more detail? I mainly need to understand what values you are expecting and what values you are getting.

astha736 commented 5 years ago

Hi, please find attached the point cloud for the same bottle as above, with depth quality=1.

I hope this explains the problem in more detail. P1 is the actual position of the bottle. P2 is the point cloud that contains most of the bottle's points (distorted depth). The depth quality definitely increases for quality=4 (above), but for quality=1 the point cloud is very distorted.

Img1 Img2 Img3

So, is there any way to avoid such distorted point clouds using the ROS wrapper, apart from the depth quality parameter?

Myzhar commented 5 years ago

Try to reduce the confidence threshold (you can use Dynamic Reconfigure, as sketched above) to cut away the depth points with lower confidence values.

Myzhar commented 4 years ago

The new version of the SDK improves the depth accuracy. A new parameter is available to filter surfaces with little texture information: depth_texture_conf.
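As a hedged sketch, in newer versions of the wrapper this filter sits next to the standard confidence threshold in the depth section of common.yaml; the values below are illustrative, not prescriptions (check the file shipped with your version):

```yaml
depth:
    depth_confidence:           50      # Dynamic - depth confidence threshold [1, 100]
    depth_texture_conf:         100     # Dynamic - texture confidence threshold [1, 100];
                                        # lower values filter out surfaces with little texture
```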