gazebosim / gazebo-classic

Gazebo classic. For the latest version, see https://github.com/gazebosim/gz-sim
http://classic.gazebosim.org/

Depth sensor noise model #2125

Open osrf-migration opened 7 years ago

osrf-migration commented 7 years ago

Original report (archived issue) by Pramuditha Aravinda (Bitbucket: aravindadp).


Currently the depth sensor does not have a random noise model and outputs perfect measurements.

It would be ideal if the depth sensor could also have a Gaussian-like noise model for its measurements.
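For reference, the regular camera sensor in SDF already accepts a `<noise>` element; the request amounts to the depth sensor honoring the same configuration. A hypothetical example of what such a sensor description could look like (values are illustrative):

```xml
<sensor name="depth_cam" type="depth">
  <camera>
    <image>
      <width>640</width>
      <height>480</height>
    </image>
    <!-- Already supported for RGB cameras; desired for depth output too -->
    <noise>
      <type>gaussian</type>
      <mean>0.0</mean>
      <stddev>0.007</stddev>
    </noise>
  </camera>
</sensor>
```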

osrf-migration commented 4 years ago

Original comment by Martin Pecka (Bitbucket: peci1).


Asked also here: [#2450: adding noise to depth sensor](https://osrf-migration.github.io/gazebo-gh-pages/#!/osrf/gazebo/issues/2450/adding-noise-to-depth-sensor).

osrf-migration commented 4 years ago

Original comment by Martin Pecka (Bitbucket: peci1).


You can check out my workaround in https://github.com/peci1/gazebo_noisy_depth_camera (also Michal Staniaszek (heuristicus)).

Javier Iván Choclin (Javier Choclin), do you think you could help get the OpenGL shader working? So far I only have the CPU implementation working. I’ve seen you’re the one who added DepthCamera to ignition gazebo, so I hope you could offer some quick insights. An attempt to get OpenGL noise working is in the `opengl` branch, but all I get as soon as I enable the custom compositor is just gray or black images…

osrf-migration commented 4 years ago

Original comment by Javier Iván Choclin (Bitbucket: Javier Choclin).


Hi Martin Pecka (peci1), unfortunately I am no longer collaborating with Gazebo as I was before, since I started a different project, but I will gladly make some comments to help you if I can. Keep in mind that this feature is already implemented in ignition sensors and will be available in Gazebo 11. If you want to contribute to previous versions of Gazebo, which as far as I know are going to be supported for several years, you can use what was done for the Render Noise Pass.

The correct approach, in my opinion, would be to do something similar in DepthCamera. However, it cannot be solved by using a different OpenGL shader in DepthCamera; I don’t remember why the shader can’t be used to add the noise. A quick solution could be to iterate over the data in the sensor and add the noise there, as is done in the Lidar. What you did is similar, but in that case I would work directly with the output data, adding the noise there without using OpenGL.

osrf-migration commented 4 years ago

Original comment by Martin Pecka (Bitbucket: peci1).


Thanks for the comment, Javier.

I’ve looked at the noise implementation in ignition-sensors, but I’m not convinced it’d work… First, it seems to use the same compositor for both RGB and depth images, but RGB images are PF_A8R8G8B8, while depth images are PF_FLOAT32_R. I’m not sure what happens if you try to write the float value into an ARGB texture (probably bad things? 🙂). Second, additive Gaussian noise doesn’t make much sense for depth images… Do you know for sure that the noise works on depth images there? I haven’t tested ignition gazebo much yet, so I just don’t know…
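On the second point: a more plausible model for depth sensors scales the noise with distance; for structured-light sensors (Kinect-style) the standard deviation is often modeled as growing roughly quadratically with depth. A minimal sketch of such a model (the function and the `sigma0` value are illustrative, not Gazebo code):

```cpp
// Sketch: depth-dependent Gaussian noise, sigma(z) = sigma0 * z^2,
// instead of the constant-sigma additive noise used for RGB images.
#include <cmath>
#include <random>

float NoisyDepth(float z, std::mt19937 &gen, float sigma0 = 0.0012f)
{
  // Keep no-return pixels untouched.
  if (!std::isfinite(z) || z <= 0.0f)
    return z;
  std::normal_distribution<float> n(0.0f, sigma0 * z * z);
  return z + n(gen);
}
```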

What I tested in the `opengl` branch is actually exactly what’s done to make RGB noise work, just using a custom compositor with a PF_FLOAT32_R render texture. And that’s what didn’t work and got me just black images. I even tried connecting the custom compositor while using the standard Gazebo/DepthMap material, but that didn’t work either. It seems that as soon as I let the original depth image render to the render texture, it gets lost somewhere…
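For context, the setup being attempted corresponds roughly to an Ogre 1.x compositor script with a single-channel float target, along these lines (the compositor and material names here are hypothetical, not the actual branch contents):

```
compositor DepthNoiseCompositor
{
    technique
    {
        // Single-channel float texture so depth values survive intact
        texture rt_depth target_width target_height PF_FLOAT32_R

        target rt_depth
        {
            // Render the original depth pass into the float texture
            input previous
        }

        target_output
        {
            input none
            pass render_quad
            {
                material DepthNoiseMaterial   // hypothetical noise shader
                input 0 rt_depth
            }
        }
    }
}
```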

Do I understand it correctly that you say somebody has already tried to alter the shader for depth maps to support noise and didn’t succeed?

As for the CPU approach: adding the noise in the DepthCamera would IMO break the rendering/sensors decoupling, wouldn’t it? I mean, the rendering namespace doesn’t know about noise… So adding it in the sensor would make more sense. But then the output would differ depending on whether you subscribe to depth messages or use ConnectNewDepthFrame… GPU lidar apparently has the same problem, though… It seems the hack I used (misusing the first newDepthFrame callback to alter the const data) is the best way to keep the decoupling between sensors and rendering, but it is an ugly hack…
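The "first callback alters the const data" hack could look roughly like this (a standalone sketch; the signature is illustrative, not the actual Gazebo or gazebo_noisy_depth_camera code):

```cpp
// Sketch of the "first subscriber mutates the const buffer" hack:
// a callback registered before all others receives the depth frame
// and, by casting away const, perturbs the shared buffer so every
// later-connected callback sees the noisy data.
#include <random>

void NoiseInjectingCallback(const float *frame, unsigned width,
                            unsigned height, std::mt19937 &gen)
{
  std::normal_distribution<float> noise(0.0f, 0.01f);
  // Ugly but effective: the buffer is shared, so this in-place edit
  // propagates to all subsequent consumers of the frame.
  float *mutableFrame = const_cast<float *>(frame);
  for (unsigned i = 0; i < width * height; ++i)
    mutableFrame[i] += noise(gen);
}
```

This keeps the rendering namespace noise-free, at the cost of relying on callback registration order and on mutating data the API declares const.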