IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

In the depth map of a box captured by my Intel RealSense D415 camera, the pixels flicker constantly, the bottom color is inconsistent, and there are many black holes in the wall of the box. How can I solve these problems? #8799

Closed zhaihuiying closed 3 years ago

zhaihuiying commented 3 years ago


[Images: problem depth map, problem depth map 4, problem depth map 2, problem depth map 3]
MartyG-RealSense commented 3 years ago

Hi @zhaihuiying A lot of the areas of the scene are not ideal for depth-sensing because they are black or dark grey. It is a general physics principle (not specific to RealSense) that black surfaces absorb light. The darker the shade, the less light that is returned from the surfaces. This makes dark grey and black surfaces difficult for depth cameras such as RealSense to read depth detail from.

You can bring out more depth detail from such surfaces by projecting a strong light source onto them. The image of a black office chair below provides an example of this, where blue depth is rendered in the areas of the back of the chair and arm-rests where a light-source is projecting onto them and the rest of the chair is plain black (empty of depth detail).

image

You may be able to fill in some of the gaps in the image by applying a Hole-Filling post-processing filter. You can test this in the RealSense Viewer tool by expanding the Post-Processing section of its side panel and enabling the Hole-Filling filter.

image

The effectiveness of the filter in closing holes is demonstrated in the before and after image below.

image

If the image is fluctuating strongly then you could try going to the Temporal Filter in the same list of filters in the Viewer and setting Filter Smooth Alpha to a very low value and Filter Smooth Delta to maximum to reduce fluctuation significantly. These settings are best suited to capturing depth when the camera or the scene being observed is static though, as it takes significantly longer for the image to update when Filter Smooth Alpha is minimal.

image

zhaihuiying commented 3 years ago

I will adjust the Viewer according to your suggestions to see if the results are better.

zhaihuiying commented 3 years ago

As a matter of fact, I am preparing to film a mouse in the box with the depth camera. The background of the depth map keeps flickering during capture, there are many holes in the cylindrical wall of the box, and it is difficult to see the shape of the mouse clearly. Because the Intel RealSense D415 has so many parameters, I don't know what to adjust to get a better depth map.

zhaihuiying commented 3 years ago

image

zhaihuiying commented 3 years ago

I tried the Hole-Filling filter in the Viewer and set the Temporal Filter, but the resulting depth map produced after-images.

[Image: problem depth map 5]
MartyG-RealSense commented 3 years ago

In the link below, another RealSense user with a D415 and a mouse in a container asked the same question about how to improve the image capture and was provided with detailed advice by RealSense team members.

https://github.com/IntelRealSense/librealsense/issues/1171

zhaihuiying commented 3 years ago

I have read this link before asking this question and modified the parameters according to its suggestions, but the image quality is just like the mouse picture above. Are there any appropriate parameter settings you could recommend to me now?

zhaihuiying commented 3 years ago

In addition, I would like to ask what causes the residual shadow in my depth map when I use the Hole-Filling filter.


MartyG-RealSense commented 3 years ago

I guess that the container that the mouse is in is the same one by your foot that was shown empty at the start of this case.

image

The Temporal Filter changes would not be suitable for this particular application with a moving mouse, as the image under those temporal settings cannot update quickly, resulting in a lot of after-images. You could leave the delta slider on maximum but move the alpha slider to the middle. This will likely result in the return of fluctuations in the depth image, though.

I wonder if the shadow outside of the bucket is the black section of one of your shoes.

image

zhaihuiying commented 3 years ago

The shaded part of the bucket is the reflective part of the floor

MartyG-RealSense commented 3 years ago

If the shadow corresponds to a floor reflection then the simplest solution may be to put a non-reflective surface under the box, such as a large sheet of dark card or even your coat.

Reflections can also be dampened by applying a fine spray-on powder such as foot powder or baby powder to the reflective area, or a professional 3D scanning aerosol spray, but the owner of the floor may not appreciate that if you are not its owner!

Glare from reflections on the image can also be significantly reduced by purchasing a physical optical filter product called a linear polarization filter and applying it over the lenses on the outside of the camera. A coat may be cheaper for camera testing purposes though.

zhaihuiying commented 3 years ago

Thanks @MartyG-RealSense. I adjusted the Delta slider and Alpha slider according to your suggestion, and the after-images disappeared. I will try to photograph the mice again to see how the imaging looks.

zhaihuiying commented 3 years ago

I have turned on the hole-filling filter and temporal filter as you suggested, and adjusted the Delta and Alpha sliders. The collected image looks like this, and the depth map and RGB map are out of sync. [Images: problem image 1, problem image 2]

zhaihuiying commented 3 years ago

I found that even though the hole-filling filter and temporal filter were applied when recording the .bag file, when I reopened the recorded .bag, the playback did not reflect these two filters, as I believe you can see clearly in the two pictures above.

MartyG-RealSense commented 3 years ago

Post-processing and alignment information is not saved into a bag file. The individual streams are recorded and when the bag file is loaded into memory, actions such as post-processing and alignment can be applied to the bag data in real-time.

The 2D mode of the RealSense Viewer does not have a depth-color alignment function. The above images seem to be from the Viewer's 2D mode, which would explain why they would appear to be unaligned - because they are unaligned.

You can, though, map RGB to depth in the Viewer's 3D point-cloud mode, which is accessible by left-clicking the 3D option in the top corner of the Viewer window.

image

In 3D mode, if you activate the depth stream first and then enable the RGB stream secondly, the color should automatically be mapped onto the depth.

You cannot record to a bag file in 3D mode though. Instead, the 3D point cloud data is saved to a ply point cloud file format.

zhaihuiying commented 3 years ago

I also want to ask why the depth map at the bottom of the box produces these red spots, and how the pulsating background prevents me from getting the result I want when processing the depth map. [Image: problem depth map 6]

zhaihuiying commented 3 years ago

https://user-images.githubusercontent.com/70570353/114990995-ae045000-9ecb-11eb-986a-29067c345e1c.mp4

MartyG-RealSense commented 3 years ago

Ideally, the depth at near range should be consistently colored blue. As the area of red that you highlighted is next to the blue area and is the same height throughout, this indicates that the camera is having difficulty reading depth detail from the surfaces in that area of the observed scene.

Looking at your earlier RGB image of the square cardboard box, it looks as though the blue areas at the sides of the image that you marked in the comment above may be the light colored fold-over lids of the box and the red area is the dark inside vertical wall of the box.

You could therefore try projecting more light into the box to illuminate the dark inside walls of the box and see whether it results in more accurate depth.

image

In the kindly provided video of the pulsating depth, it is more stable in the blue center area that has the correct depth. The rest of the image (the red area) may also stabilize if those dark parts of the surfaces can be accurately analyzed for depth detail once greater illumination is provided.

zhaihuiying commented 3 years ago

My actual capture environment is like this, using a square box. So is the result caused by my shooting environment being too dark? If I want to make the image more stable, do I need to provide a bright lighting environment manually?

[Image: problem depth map 7]
MartyG-RealSense commented 3 years ago

The scene would certainly benefit from additional illumination. You could first try increasing the Laser Power setting from its default of '150' to its maximum value of '360' if you have not done so already though. This should make the infrared dot projection more visible on a black backdrop and so make it easier for the camera to read depth detail from the dots (which the camera can use as a 'texture source' in scenes where there is little or no analyzable texture on surfaces).

The image below demonstrates dot visibility on a black surface at Laser Power of '150' (upper image) and '360' (lower image).

image

MartyG-RealSense commented 3 years ago

Hi @zhaihuiying Do you require further assistance with this case, please? Thanks!

MartyG-RealSense commented 3 years ago

Case closed due to no further comments received.