IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

Depth modules return wrong depth data #8258

Closed · oxidane-lin closed 3 years ago

oxidane-lin commented 3 years ago
Required Info

| Item | Value |
| --- | --- |
| Camera Model | D435i |
| Firmware Version | 05.12.09.00 |
| Operating System & Version | Ubuntu 16 |
| Platform | NVIDIA Jetson TX2 and PC |
| SDK Version | 2.33.1 and 2.38.1 |
| Language | C++ |
| Segment | Robot |

Issue Description

Hello, engineers from IntelRealSense: I am using a D435i and the depth module is returning wrong depth data. Here is a detailed description with RGB and depth images. The problem occurs frequently when the camera is facing a wall that has grid fences in front of it. The distance to the wall should be continuous, but as the pictures show, the depth data are much worse than expected, with lots of wrong depth values appearing on the wall. I ran some tests to narrow down the cause. First, I swapped the camera and the SDK version; neither seems to help. Then I changed the depth resolution between 640×480 and 848×480: comparing picture 1 to 2 (or 3 to 4), the two resolutions give different results. The problem also depends on the distance from the camera to the wall (6 m vs. 6.5 m). In summary, at a distance of 6 m with 640×480 resolution, or 6.5 m with 848×480, the problem shows up. Pictures 5 and 6 show the results after a cloth was removed from the wall: even more wrong data appeared.

I'm guessing from the tests above that the wall may not offer enough texture, BUT it's not a plain white wall. Or is this a bug? What can I do to avoid this problem? I need depth data out to around 10 m, but I can't accept wrong data.

[Images 1–6, captions:]

1. depth 640×480, 6 m, covered
2. depth 848×480, 6 m, covered
3. depth 848×480, 6.5 m, covered
4. depth 640×480, 6.5 m, covered
5. depth 640×480, 6 m, uncovered
6. depth 848×480, 6.5 m, uncovered

MartyG-RealSense commented 3 years ago

Hi @oxidane-lin The main factor in the inaccuracy of the depth results in this case is likely to be the 6 meter distance of the wall from the camera. With the 400 Series cameras, depth measurement error (RMS error) increases as the distance of an observed object / surface from the camera increases, growing roughly with the square of the distance. The error starts to become noticeable beyond 3 meters with the D435 cameras.

The D455 camera model has 2x the accuracy over distance of the D435 models, meaning that D455 has the same accuracy at 6 meters that the D435 models do at 3 meters.
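As a rough illustration of how this error scales, here is a sketch of the commonly cited stereo RMS-error approximation, with assumed ballpark D435 parameters (~50 mm baseline, ~425 px focal length at 848x480, ~0.08 px subpixel accuracy); actual values vary per unit and calibration:

```python
# Approximate stereo depth RMS error: err = z^2 * subpixel / (focal_px * baseline_m)
BASELINE_M = 0.050   # stereo baseline in metres (assumed)
FOCAL_PX = 425.0     # depth focal length in pixels at 848x480 (assumed)
SUBPIXEL = 0.08      # disparity subpixel accuracy (assumed)

def depth_rms_error(z_m: float) -> float:
    """Estimated depth RMS error in metres at distance z_m metres."""
    return (z_m ** 2) * SUBPIXEL / (FOCAL_PX * BASELINE_M)

for z in (3.0, 6.0):
    print(f"z = {z:.1f} m -> ~{depth_rms_error(z) * 100:.1f} cm RMS error")
# ~3.4 cm at 3 m vs ~13.6 cm at 6 m: doubling the distance roughly quadruples the error.
```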

You also note that the depth image of the grid-fenced wall worsens when the cloth is taken away. This leads me to think that aside from distance accuracy issues, there is additional disruption due to a phenomenon where the camera may be confused by repetitive horizontal or vertical patterns (a row of vertical fence posts, a row of similar tree-tops, or horizontal rows of window blinds). The green-yellow boxes on the depth images that do not match details on the RGB image would be consistent with phantom detail generated from repetitive patterns.

Your depth images may be better when the cloth is present on the wall because the cloth is partially breaking up the repetitive pattern.

The discussion in the link below looks at this subject and offers links to resources for attempting to reduce the effect.

https://github.com/IntelRealSense/librealsense/issues/6713
https://github.com/IntelRealSense/librealsense/issues/6713#issuecomment-651114720

I would also recommend checking that the Threshold Filter in the Post-Processing section of the Viewer's Stereo Module options is not enabled. When enabled, this filter limits the depth image to a default maximum of 4 meters; with it disabled, the image renders the full distance of detail the camera is able to observe. If the camera can render a wall 6 meters away, though, the filter is probably already disabled in your Viewer.

[Screenshot: Threshold Filter in the Viewer's Post-Processing options]
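The same limit can also be controlled in application code rather than the Viewer; a minimal pyrealsense2 sketch (the 10 m maximum is an assumed value, chosen to suit your stated range requirement):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Threshold filter: keep depth between 0.15 m and 10 m (the filter's default max is 4 m).
threshold = rs.threshold_filter(0.15, 10.0)

frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
filtered = threshold.process(depth)
```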

In regard to the black dots on the floor of the depth image, this may be a phenomenon called laser speckle that results from the laser-based dot-pattern projector built into the camera. You could try filling in the dots by enabling the Hole Filling Filter in the Post-Processing filter list, or by using an external LED-based pattern projector instead of the D435i's built-in projector.
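Continuing the sketch above, the Hole Filling filter can likewise be applied in code (mode 1 is one of its three fill strategies; which one looks best is scene-dependent):

```python
# Hole Filling filter: fills gaps such as laser-speckle dropouts in the depth frame.
# Modes: 0 = fill_from_left, 1 = farest_from_around, 2 = nearest_from_around.
hole_filling = rs.hole_filling_filter(1)
filled = hole_filling.process(depth)  # 'depth' from the previous sketch
```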

oxidane-lin commented 3 years ago

@MartyG-RealSense Thank you for your quick reply. I have read the suggested links and whitepapers and tried to adjust some parameters such as SecondPeakThreshold and TextureCountThresh, and to use some filters, but they do not help enough. I realize now that this is a common phenomenon, and the repetitive patterns may be the main factor. I am just very curious why it is related to a specific distance. Shouldn't the patterns look the same whether closer or further away? I do have a D455, but it is on loan right now; I will run some tests and try to get better results with it. The whitepaper shows a post-processing example, but I did not get satisfying results with its configuration JSON file. I haven't tried the post-processing filters yet because they need extra computation and our TX2 is nearly fully loaded. I will do some tests on my PC later. Thank you again for your help!
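For reference, parameters such as SecondPeakThreshold and TextureCountThresh are exposed through the SDK's advanced mode; a minimal pyrealsense2 sketch (the threshold values below are illustrative placeholders, not tuned recommendations):

```python
import pyrealsense2 as rs

ctx = rs.context()
device = ctx.query_devices()[0]

advanced = rs.rs400_advanced_mode(device)
if not advanced.is_enabled():
    advanced.toggle_advanced_mode(True)  # the device resets and re-enumerates

depth_control = advanced.get_depth_control()
depth_control.deepSeaSecondPeakThreshold = 600  # illustrative value
depth_control.textureCountThreshold = 4         # illustrative value
advanced.set_depth_control(depth_control)
```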

MartyG-RealSense commented 3 years ago

I am admittedly not an expert on the science of repetitive patterns in stereo imaging. Intel's excellent camera tuning guide advises, though, that as well as using Second Peak Threshold to combat repetitive patterns, "another mitigation strategy that may help is to tilt the camera a few degrees (e.g., 20-30 deg) from the horizontal".

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-avoid-repetitive-structures

Another image-enhancing action that you could try is to reduce the black empty areas on the ceiling at far distance by maximizing the Laser Power slider in the Controls section of the Stereo Module options.

Increasing the value of Laser Power reduces sparseness on the depth image (fewer gaps). I tested with my camera in average light conditions, viewing a wall 6 meters away to simulate tall-ceiling warehouse conditions. In my test under these conditions, raising Laser Power from the default of 150 to the maximum of 360 significantly filled in missing ceiling detail at the 5-6 meter range.
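In application code, the same control can be set through the depth sensor's options; a minimal sketch (querying the option range rather than hard-coding the 360 maximum):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()
if depth_sensor.supports(rs.option.laser_power):
    power_range = depth_sensor.get_option_range(rs.option.laser_power)
    depth_sensor.set_option(rs.option.laser_power, power_range.max)  # 360 on D435i
```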

A key difference between my test location and yours though is that you seem to have fluorescent ceiling strip lights. Fluorescent lights may introduce noise into images because they contain a heated gas that flickers at frequencies that are hard to see with the human eye. This negative effect may be reduced by using a camera FPS that is close to the operating frequency of the lights. For some lights this may be 30 FPS and for others it may be 60 FPS. Some fluorescent lights in world regions such as Europe may also operate at 50Hz, making 50 FPS camera speed a closer match (though 50 FPS is not a default supported mode on RealSense cameras).
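If you want to pin the stream rate in code to match the lighting frequency, a minimal sketch (848x480 depth at 60 FPS, assuming that mode suits your setup):

```python
import pyrealsense2 as rs

config = rs.config()
# 848x480 @ 60 FPS to better match 60 Hz fluorescent lighting.
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 60)

pipeline = rs.pipeline()
pipeline.start(config)
```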

oxidane-lin commented 3 years ago

@MartyG-RealSense Sorry for the late reply over the weekend. Indeed, tilting the camera by about 10 degrees makes the results much better. I had noticed this before opening this issue, but our camera cannot avoid facing straight towards the wall while moving, so I have to find other ways to filter out wrong depth data than finding a suitable angle or position. Maximizing the laser power is effective indoors (maybe with a white wall) but not effective enough in our conditions. I placed the camera where lots of wrong data occurred and maximized the laser power; there was an improvement, with about 60% of the wrong depth data vanishing, except in the central part. I guess the reason is the sparsity of the IR dots compared to the grid fence: at a distance of around 6 m, there are 2-3 grid cells between any two dots. As you mentioned before, I think external lights should be involved. As for the lighting differences, the bright parts you see in the infrared stream are actually roof windows, though we do have 60 Hz fluorescent lights. Dark bands due to the rolling shutter of the D435i show up in continuous pictures or streams, but that is a separate problem; I don't think they affect the depth precision, right? Usually I use a frame rate of 30 or 15 FPS. I increased it to 60 FPS and saw no big difference.

MartyG-RealSense commented 3 years ago

If you are not aligning depth to color, then I would think that rolling-shutter artifacts on the RGB image would not affect the accuracy of the depth image, as depth is generated from the infrared frames.

If the effectiveness of the laser power in your case is limited by the maximum power available from the camera's built-in projector being insufficient for a space the size of your indoor room, an external pattern projector could offer a higher power output and so a larger range. On the images in your opening message, it can be seen how the strength of the dot pattern on the floor seems to taper off past the halfway distance from camera to far wall, and not reach the back wall.

External projectors also have the advantage that they can be moved around and shaken without affecting the camera image. Range can also be extended by positioning multiple projectors (e.g. putting a second one at the halfway point of the room). The section of Intel's white-paper document about projectors at the link below discusses this subject.

https://dev.intelrealsense.com/docs/projectors#section-4-increasing-range

oxidane-lin commented 3 years ago

@MartyG-RealSense Unfortunately, I am using depth-to-color alignment. It seems several factors are affecting the results. I'll try some backend filtering later. Thank you for your help, and I'll reopen this issue if I make progress here.

oxidane-lin commented 3 years ago

Hi @MartyG-RealSense I hope you remember this issue. I reviewed our conversation and need to double-check that our discussion was focused on the yellow and green areas on the wall. Those areas should return a depth of 6-7 m rather than 2-3 m. That data is plainly wrong; it is not in the ordinary measurement-error category. The depth camera is returning wrong data at certain distances. Shouldn't this be a bug or a hardware defect? Shouldn't there be updates to eliminate the wrong data?

MartyG-RealSense commented 3 years ago

Are you referring to the false-data floating blobs on the depth image, please?

[Screenshot: depth image with the false-data floating blobs]

As mentioned earlier in this discussion, they are characteristic of detection by the camera of repetitive patterns. Aside from the links provided that offer advice about combatting it (repeated below), I do not have further advice that I can offer about reducing repetitive patterns.

https://github.com/IntelRealSense/librealsense/issues/6713#issuecomment-651114720
https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-avoid-repetitive-structures

I would not classify the repetitive pattern issue as a bug, but a consequence of the stereo depth algorithm.

oxidane-lin commented 3 years ago

@MartyG-RealSense I am referring to the two blue rectangle areas in the depth image below. I think we're talking about the same problem, right? Just double-checking to avoid misunderstanding.

[Image 7: depth image with two blue rectangles marking the wrong-depth areas]

MartyG-RealSense commented 3 years ago

Yes, I am referring to those blobs highlighted by the blue rectangles too.

oxidane-lin commented 3 years ago

Thank you @MartyG-RealSense . You've helped a lot. I'll try filtering those wrong data out.

MartyG-RealSense commented 3 years ago

You are very welcome @oxidane-lin - good luck!

Aanalpatel99 commented 2 years ago

Hello everyone, I am facing the same issue of inaccuracies, though my case and application are a bit different. I have a camera at a height of 1250 mm, angled 35° downwards. The minimum value I get from the camera is 1278 mm, but the values keep fluctuating between 1278 and 1282 mm. Since the floor is static, I should be able to get a stable value, right? I am also unable to detect objects around 10 mm high. Thank you

Aanalpatel99 commented 2 years ago

I am using Python for the code and have also applied post-processing filters.

MartyG-RealSense commented 2 years ago

Hi @Aanalpatel99 I would recommend using a maximum camera tilt angle of 30 degrees if possible. Whilst the camera can still operate at a larger angle, the risk of problems with the image may increase as the angle increases further beyond 30 degrees.

Aanalpatel99 commented 2 years ago

Thank you so much @MartyG-RealSense. I cannot change the camera angle because the FOV it gives at this angle is important to me. If you have any other solutions, please let me know.

MartyG-RealSense commented 2 years ago

You may get less inaccuracy in depth values if you change the camera configuration preset to High Accuracy to screen out obviously inaccurate depth values from the image. https://github.com/IntelRealSense/librealsense/issues/2577#issuecomment-432137634 provides an example of Python scripting for doing so.

If too much depth detail is stripped out by the High Accuracy setting, try changing the line `if visulpreset == "High Accuracy":` to the Medium Density preset for a better balance between accuracy and a good amount of depth detail on the image:

`if visulpreset == "Medium Density":`
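For context, a self-contained sketch along those lines in pyrealsense2 (the preset is selected by matching the visual_preset option's value descriptions; "High Accuracy" and "Medium Density" are the D400 preset names):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

# Walk the visual_preset option's values and select the preset by name.
preset_range = depth_sensor.get_option_range(rs.option.visual_preset)
for i in range(int(preset_range.max) + 1):
    name = depth_sensor.get_option_value_description(rs.option.visual_preset, i)
    if name == "High Accuracy":  # or "Medium Density" for more depth detail
        depth_sensor.set_option(rs.option.visual_preset, i)
        break
```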


If you are using more than one post-processing filter, Intel recommend that the filter types are applied in a particular order when listed in a script.

https://dev.intelrealsense.com/docs/post-processing-filters#section-using-filters-in-application-code
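Following that guidance, a minimal sketch of the recommended ordering (the disparity-domain transform wrapped around the spatial and temporal filters is part of Intel's recommended sequence):

```python
import pyrealsense2 as rs

# Post-processing filters in Intel's recommended order.
decimation = rs.decimation_filter()
depth_to_disparity = rs.disparity_transform(True)
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
disparity_to_depth = rs.disparity_transform(False)
hole_filling = rs.hole_filling_filter()

def apply_filters(depth_frame):
    f = decimation.process(depth_frame)
    f = depth_to_disparity.process(f)
    f = spatial.process(f)
    f = temporal.process(f)
    f = disparity_to_depth.process(f)
    return hole_filling.process(f)
```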

What filters are you applying and what order are they listed in, please?


If you are not already using 1280x720 depth resolution, setting the depth stream to that resolution when the camera is at 30 degrees may help to compensate for the altered FOV by slightly increasing how much of the scene the camera can see.

Aanalpatel99 commented 2 years ago

Hi @MartyG-RealSense, I am using three filters: decimation -> spatial -> temporal. I tried using High Accuracy but it didn't help with the fluctuation of the depth value. Thank you

MartyG-RealSense commented 2 years ago

Another way to stabilize fluctuating depth is to reduce the value of 'alpha' on the temporal filter. This slows the rate at which the depth image updates, so it may not be suitable for an application where the camera is moved around quickly, as the slowdown produces a visible transition between one depth state and the next.

https://github.com/IntelRealSense/librealsense/issues/10078#issuecomment-997292916 has a Python script that demonstrates configuring the value of alpha on the temporal filter. '0.1' alpha (instead of its default of 0.4) and the 'delta' left unchanged on its default of '20' should be a good test value.
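A minimal sketch of that configuration (alpha at 0.1, delta left at its default of 20):

```python
import pyrealsense2 as rs

temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.1)  # default 0.4; lower = heavier smoothing
temporal.set_option(rs.option.filter_smooth_delta, 20)   # left at its default

# Then, per frame: filtered = temporal.process(depth_frame)
```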

Aanalpatel99 commented 2 years ago

Thank you so much @MartyG-RealSense for the help, the temporal filter's attributes value change helped a lot.

MartyG-RealSense commented 2 years ago

That's great news, @Aanalpatel99 - thanks for the update!