stereolabs / zed-ros-wrapper

ROS wrapper for the ZED SDK
https://www.stereolabs.com/docs/ros/
MIT License

Can pointclouds be completely undistorted? #497

Closed: lossemotion closed this issue 4 years ago

lossemotion commented 4 years ago

I want to use the ZED's point cloud for mapping, which requires a very accurate point cloud. I tried the methods mentioned in other issues to improve the quality of the point cloud, but even in my best case it still shows some distortion, for example at the edges, and in some places the depth of the wall does not match the actual geometry. Here are pictures of the point cloud in my best case (screenshots 2019-11-25 22-22-34 and 2019-11-25 22-23-03) and a picture of the environment (screenshot 2019-11-25 22-23-53).

I would like to know whether the point cloud shown in the official documentation https://www.stereolabs.com/docs/ros/depth_sensing/ is the real behavior of the ZED, or whether it has been post-processed. Is there a way for our own ZED to achieve exactly the same result as that picture? Here is the picture from the official documentation (image: rviz_pointcloud).

Looking forward to your reply, thanks!

Myzhar commented 4 years ago

Hi @lossemotion, the image that you posted from the documentation shows the real behavior of the ZED and has not been post-processed. The only difference from your working environment is the texture of the surfaces.

Stereo vision processing works by matching the visual features in the left image to the same features detected in the right image. A highly textured surface is easy to triangulate, and the result with it is always very good. In your case the surfaces have uniform colors, so there is a lack of information and the stereo matching cannot reach its best performance, even though the ZED SDK runs very powerful and sophisticated algorithms to add visual information where it is poor. One of the problems with uniformly colored surfaces is that the environment illumination adds information that is not correct, so the surfaces end up distorted as in your second picture.
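
For intuition, here is a minimal sketch using OpenCV's StereoSGBM. It is not the ZED SDK pipeline, just the classic textbook approach with hypothetical file names, but it shows where the texture dependence comes from:

```python
# A minimal stereo matching sketch with OpenCV, only to illustrate why texture
# matters. This is NOT the ZED SDK's algorithm, and the file names are hypothetical.
import cv2

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=128,  # disparity search range (must be divisible by 16)
    blockSize=7,         # matching window; bigger windows blur low-texture regions
    uniquenessRatio=10,  # reject matches not clearly better than the runner-up
)
disparity = matcher.compute(left, right)  # int16, fixed point (true disparity * 16)

# On a uniformly colored wall many windows look identical along the epipolar
# line, so the matcher either rejects them (holes) or picks a wrong disparity,
# which shows up as bumps and warps in the reconstructed surface.
```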

lossemotion commented 4 years ago

@Myzhar OK, thank you very much for your quick reply. I will try it in a scene with rich texture. I also have a few more questions. Are quality and sensing mode the only two effective parameters in common.yaml for adjusting the behavior of the point cloud? My settings for the results above were quality 3 and sensing mode 1. Do you have any suggestions for parameter settings in conditions with poor texture, or is there nothing we can do? I mainly use the point cloud for accurate identification of obstacles.
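
For reference, this is a small sketch of how I check which values the node actually loaded from common.yaml (the node name /zed/zed_node and the parameter paths are my guesses based on the default launch files, so they may differ on other setups):

```python
#!/usr/bin/env python
# Sketch: query the depth-related parameters the ZED node loaded.
# The namespace /zed/zed_node and the parameter paths are assumptions
# taken from the wrapper's default launch files; adjust to your setup.
import rospy

rospy.init_node('check_zed_depth_params', anonymous=True)

for name in ('depth/quality', 'depth/sensing_mode', 'depth/confidence'):
    full_name = '/zed/zed_node/' + name
    value = rospy.get_param(full_name, None)  # None if the parameter is missing
    print('{}: {}'.format(full_name, value))
```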

Myzhar commented 4 years ago

You should not set sensing_mode to 1. It is a feature added for virtual reality: it adds information that does not exist and that is only useful for visualization. Another parameter you can modify is the confidence level; decreasing it removes the points that have a poor stereo matching score.
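
As a sketch (the node name /zed/zed_node and the dynamic parameter name confidence are assumptions here; check `rosrun rqt_reconfigure rqt_reconfigure` for the exact names in your version), you can lower it at runtime with dynamic_reconfigure:

```python
#!/usr/bin/env python
# Sketch: lower the depth confidence threshold at runtime via dynamic_reconfigure.
# Node name and parameter name are assumptions; verify them with rqt_reconfigure.
import rospy
import dynamic_reconfigure.client

rospy.init_node('set_zed_confidence', anonymous=True)

client = dynamic_reconfigure.client.Client('/zed/zed_node', timeout=5)
client.update_configuration({'confidence': 50})  # keep only well-matched points
```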

lossemotion commented 4 years ago

Oh, I forgot to say that I have set the confidence to 0.8. Um... I do know the meaning of sensing mode: I found that there are many holes in the point cloud when sensing mode is 0, but it looks smoother when it is set to 1. Do I need to calibrate my camera's intrinsic parameters and replace the values in the source code?

Myzhar commented 4 years ago

No, you must not do that. The YAML parameter files are there for that purpose.
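
If you want to verify which calibration the node is actually using, without touching the code, you can simply read the CameraInfo it publishes; a small sketch (the topic name is an assumption based on the default launch files):

```python
#!/usr/bin/env python
# Sketch: print the intrinsics the wrapper publishes, instead of editing sources.
# The topic name is an assumption; check `rostopic list` for the real one.
import rospy
from sensor_msgs.msg import CameraInfo

def callback(msg):
    rospy.loginfo('K (intrinsics): %s', str(msg.K))
    rospy.loginfo('D (distortion): %s', str(msg.D))
    rospy.signal_shutdown('received one CameraInfo message')

rospy.init_node('print_zed_camera_info', anonymous=True)
rospy.Subscriber('/zed/zed_node/left/camera_info', CameraInfo, callback)
rospy.spin()
```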