IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

D455 - Dynamic Calibration cannot be finished/Inaccurate deprojection values #10429

Closed · powellsky closed this issue 2 years ago

powellsky commented 2 years ago

Required Info

| | |
|---|---|
| Camera Model | D455 |
| Firmware Version | 05.12.14.50 |
| Operating System & Version | Win 10 / Linux (Ubuntu 18) |
| Platform | PC / NVIDIA Jetson |
| SDK Version | 2.48.0 |
| Language | python/opencv/unity |
| Segment | AR |

Issue Description

Hello, I have an issue with one of my D455 modules. I use 3 cameras to retrieve the 3D position of a point in space. All of them run the same Python script on a Jetson Xavier NX module, and I use the rs.rs2_deproject_pixel_to_point function to get the 3D position. That data is then transformed into a different world coordinate system to visualize the object's translation. Two of my cameras work fine, with no major deprojection or depth error, but one of them started giving me bad results, with the error growing rapidly as the object moves.

The first thing I did was check all of my cameras with the Depth Quality Tool, but I did not see any big difference in RMS error between them. Next, I ran them all through the Dynamic Calibrator tool. The 2 cameras that work fine went through the calibration process fairly quickly and without any hassle, but the one that seems to have a defect could not finish the calibration. It fails at the step where you have to fit the marker (displayed on my mobile) within the blue square regions: every time I am about to finish the calibration, new blue regions appear in different places. This goes on forever, making the calibration impossible to finish and eventually raising a timeout exception.
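For reference, here is a minimal sketch of what rs.rs2_deproject_pixel_to_point computes in the common case of a pinhole model with zero distortion coefficients (which is what the D400 depth stream typically reports); the intrinsics values below are made-up placeholders, not from a real device:

```python
# Sketch of pinhole deprojection (what rs2_deproject_pixel_to_point does
# when the distortion coefficients are all zero).
def deproject_pixel_to_point(intrin, pixel, depth_m):
    """intrin: dict with fx, fy, ppx, ppy, as in rs.intrinsics."""
    u, v = pixel
    x = (u - intrin["ppx"]) / intrin["fx"]  # normalized image coordinates
    y = (v - intrin["ppy"]) / intrin["fy"]
    return [depth_m * x, depth_m * y, depth_m]

# Example: principal point at the image center, 1 m depth at that pixel.
intrin = {"fx": 631.0, "fy": 631.0, "ppx": 640.0, "ppy": 360.0}
point = deproject_pixel_to_point(intrin, (640.0, 360.0), 1.0)
# -> [0.0, 0.0, 1.0]
```

The key point for debugging is that any error in the depth value scales the X and Y coordinates too, so a depth bias propagates into the whole 3D position.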

I would highly appreciate any feedback. I am also wondering whether there is a way to detect this kind of anomaly outside of the Dynamic Calibrator tool. Thank you in advance.

MartyG-RealSense commented 2 years ago

Hi @powellsky Could you try resetting the affected camera to its factory-new default calibration in the RealSense Viewer using a procedure at https://github.com/IntelRealSense/librealsense/issues/10182#issuecomment-1019854487 to see whether it corrects the problem, please? Thanks!

Instead of using the Dynamic Calibration tool, you can also re-calibrate depth and receive a 'health check' score for the camera's calibration using the On-Chip Calibration tool that is built into the RealSense Viewer. It can be found under the More option of the RealSense Viewer.

image

Full details of the On-Chip calibration tool can be found in Intel's white-paper guide at the link below.

https://dev.intelrealsense.com/docs/self-calibration-for-depth-cameras
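If you later want to trigger the same routine from code rather than the Viewer, pyrealsense2 exposes it through the auto-calibrated-device interface. The sketch below is an outline only: the JSON keys and the exact return shape of run_on_chip_calibration have varied between SDK versions, so check them against the white paper and your SDK before relying on this.

```python
import json

def make_calib_config(speed=2, scan_parameter=0):
    # Keys follow the self-calibration white paper ('speed' 0-4, slowest
    # to fastest); verify them against your SDK version.
    return json.dumps({"calib type": 0,
                       "speed": speed,
                       "scan parameter": scan_parameter})

def run_self_calibration(device, timeout_ms=15000):
    """device: a connected rs.device (hardware required).
    Returns the new calibration table along with a health figure; a
    health value closer to 0 indicates a healthier calibration."""
    auto_dev = device.as_auto_calibrated_device()
    return auto_dev.run_on_chip_calibration(make_calib_config(), timeout_ms)
```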

powellsky commented 2 years ago


Thank you, it worked! I managed to finish the calibration procedure after following your steps. The self-calibration routine is also exactly what I needed for my program, and On-Chip Calibration seems like a better and less tedious alternative to the Dynamic Calibrator.

Sorry for duplicating an issue.

MartyG-RealSense commented 2 years ago

That's great to hear, @powellsky - thanks very much for the update! Don't worry about duplication, as the librealsense forums have such a large history that it is natural that an existing solution may be missed.

powellsky commented 2 years ago

@MartyG-RealSense Thank you again for solving the calibration issue. However, I mentioned how my issue originally appeared: I noticed a rapidly growing error in the depth value of the tracked object. To be sure, I re-calibrated all of my cameras, and suddenly I experienced similar behavior on all of them (to be honest, it could be complete coincidence that this happened after calibration, and the issue may lie somewhere else). I made sure that the RealSense Viewer gives me the same results as the software we wrote, and the results were identical; I also did some manual measurements to confirm it. The measurements are:

- ~57 cm to target in reality is ~57 cm to target in software (beyond 60 cm, measurements get worse)
- ~70 cm to target in reality is ~72 cm to target in software
- ~86 cm to target in reality is ~91 cm to target in software
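To put these numbers in context: random stereo depth noise grows roughly with the square of distance, dZ ~ Z² · Δd / (fx · B). Plugging in the D455's 95 mm baseline and an assumed ~631 px focal length (a placeholder for 1280×720 depth, not a measured value) shows that subpixel matching noise alone predicts only millimeter-level error at these ranges:

```python
def expected_depth_error(z_m, fx_px=631.0, baseline_m=0.095,
                         disparity_err_px=0.1):
    # Stereo uncertainty model: dZ ~ Z^2 * disparity_error / (fx * baseline).
    # fx is an assumed value; 0.095 m is the D455 stereo baseline.
    return z_m ** 2 * disparity_err_px / (fx_px * baseline_m)

for z in (0.57, 0.70, 0.86):
    print(f"{z:.2f} m -> expected random error ~ {expected_depth_error(z) * 1000:.1f} mm")
```

Under these assumptions the predicted random error stays near one millimeter, far below the observed 2-5 cm offsets, which is consistent with a systematic problem (calibration or scene texture) rather than sensor noise.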

I would highly appreciate any help in the matter, as high precision in the range of one meter is crucial for us. If you need anything regarding the software we wrote, I am happy to provide more details.

MartyG-RealSense commented 2 years ago

A range of lighting and environmental factors in a scene can have a negative impact on depth measurement accuracy. Would it be possible to provide an RGB image of the scene that the camera is observing, in order to identify possible elements that may be causing disruption, please?

Having said that, if you are obtaining correct measurement at 60 cm then the issue is likely to be related to the observed object / surface rather than the wider environment and lighting around it. The RGB image will help to identify that.

powellsky commented 2 years ago


Of course. Here is an RGB preview of what the camera is seeing:

rgb

To be more specific, I will also attach a preview of how the depth and infrared streams look. During detection we do not use the RGB camera, only infrared, and the only depth position we measure is the center (the average position) between the IR diodes:

preview_2

MartyG-RealSense commented 2 years ago

Thanks very much for the images. It looks as though your D455's IR emitter component is disabled. If the emitter (which is a separate component from the left and right IR sensors) was enabled then there would be a semi-random pattern of dots projected onto the scene and visible on the infrared image, like this:

image

Because the IR emitter is a separate component from the IR imagers, the infrared stream can continue to be used when the IR emitter is inactive.

The dot pattern may be absent in the Viewer if:

(a) the Emitter Enabled drop-down menu in the Viewer's options side panel is set to 'Off' or 'Auto' instead of 'Laser';

(b) the Laser Power option is set to '0', which automatically disables the IR emitter;

(c) the IR emitter is set to 'Laser' and Laser Power is set to its default value, but some issue is preventing the pattern from being visible on the IR image.

When the IR emitter is disabled or the value of Laser Power is minimized, the infrared image will darken and the dot pattern projection will not be visible on the infrared image, resulting in an infrared image resembling the one that you provided above.

The amount of detail on the depth image may also significantly reduce and resemble your depth image above. This is because the camera uses the dot pattern projected onto surfaces in the scene as a 'texture source' to analyze the surfaces for depth detail if the surface is a material that has low texture or no texture (such as a desk, door or wall).

If a scene is well lit then the camera can alternatively use ambient light in the scene to analyze surfaces for depth detail instead of using the dot pattern.
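The same options can be set programmatically with pyrealsense2. emitter_enabled and laser_power are real rs.option members; the 0-360 mW power range used below is an assumption that should be confirmed on your own unit with get_option_range():

```python
def clamp_laser_power(value_mw, lo=0.0, hi=360.0):
    # Assumed D455 laser-power range; query the real bounds with
    # depth_sensor.get_option_range(rs.option.laser_power).
    return max(lo, min(hi, value_mw))

def enable_emitter(depth_sensor, power_mw=150.0):
    """depth_sensor: the depth sensor of a connected device, e.g.
    profile.get_device().first_depth_sensor(). Hardware required."""
    import pyrealsense2 as rs
    depth_sensor.set_option(rs.option.emitter_enabled, 1)  # 1 = laser on
    depth_sensor.set_option(rs.option.laser_power,
                            clamp_laser_power(power_mw))
```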


I would add that if your goal is to sense the object held by the tripod, and your project has the option of changing camera models, then the new RealSense D405 camera model would be a good choice. It specializes in accurate close-range depth sensing at an ideal range of 7 cm to 50 cm, enabling the camera to be placed closer to an object / surface than any previous RealSense camera model.

powellsky commented 2 years ago


Thank you very much for your insightful reply. We are well aware of the laser dot pattern and its uses, and we did plenty of tests with and without it. In our case it greatly reduces the jitter in the depth readings and, as you said, thanks to its illumination and texture it gives a wider spectrum of depth and fewer zero-depth pixels. Since we are only interested in that one small region and its high accuracy, we used the High Accuracy profile in the RealSense Viewer, which we modified a little for our environment's needs.

Our tests with the IR emitter on and off showed that the distance inaccuracy persists (still a few centimeters off). As said before, the emitter helps a lot with jitter, but that is not a big problem for us since we have an algorithm to smooth out the result; the main distance error stays the same. We also noticed that the constellation of IR diodes we use (shown in the second picture in my previous reply) itself creates a recognizable pattern for solving depth in that region, although please correct me if I am wrong :)

Regarding your advice about the D405, which I appreciate a lot: in our case it is important to have good accuracy in the range of 0.7 - 1 meter. We can deal with millimeter errors, but when the error reaches a couple of centimeters, things get a little out of control for us.

MartyG-RealSense commented 2 years ago

Recognizable patterns do matter with the algorithms used by stereo depth cameras such as the 400 Series. In the link below, Intel has an introductory Depth from Stereo tutorial guide that discusses stereo depth principles such as calculating depth by estimating disparities between matching key-points in the left and right images:

https://github.com/IntelRealSense/librealsense/blob/master/doc/depth-from-stereo.md
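The core relation from that guide is that depth is inversely proportional to disparity. A quick sketch (the 95 mm baseline is the D455's; the focal length in pixels is an assumed placeholder):

```python
def depth_from_disparity(disparity_px, fx_px, baseline_m):
    # depth = fx * baseline / disparity; a failed stereo match has no
    # disparity, which the SDK reports as depth 0 (invalid).
    if disparity_px <= 0:
        return 0.0
    return fx_px * baseline_m / disparity_px

# A 1-pixel disparity error matters more the farther the target is:
d1 = depth_from_disparity(60.0, 631.0, 0.095)  # about 1.0 m
d2 = depth_from_disparity(59.0, 631.0, 0.095)  # 1 px less -> reads farther
```

This is also why a surface the matcher cannot lock onto (low texture, few key-points) tends to produce either zero depth or a biased disparity, and hence a biased distance.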

Some visual patterns detected in a scene by the camera can also have negative implications for depth measurement. If there are horizontal or vertical rows of similar looking 'repetitive patterns' such as a horizontal row of fence posts or tree tops, or a vertical stack of window blind slats, then the camera can become confused and generate depth error or 'ghost noise' areas of depth on the image that do not represent objects in the real world scene.

The Intel guides at the links below provide advice about reducing the impact of repetitive patterns.

https://dev.intelrealsense.com/docs/depth-map-improvements-for-stereo-based-depth-cameras-on-drones#section-v-incorrect-depth-values

https://dev.intelrealsense.com/docs/mitigate-repetitive-pattern-effect-stereo-depth-cameras

powellsky commented 2 years ago


I am grateful for your help, as I think we finally got to the source of the problem. After reading the documents you referenced I did more testing. Because the object we are tracking is quite small, even with 4 infrared diodes attached to it there is not enough information to resolve the invalid depth pixels. First, the little box we use for detection is a plain color with no texture on it. Additionally, depending on the angle and distance, the camera can miss the IR diodes or see only very few (1 or 2) of them. I did tests with a highly detailed object at a similar distance, and the results confirmed the theory.

In the picture below, A is our small object with the infrared diodes, while B is a bigger, more detailed cube at the same height and a very similar distance. After manual measurements, B's depth was correct:

A

It also turns out that the invalid depth values always read farther than the object actually is.
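This also suggests the programmatic anomaly check I asked about earlier: sample an ROI around the tracked point, measure the valid-depth fill rate, and use the median of the valid values instead of a plain average so dropped (zero) pixels and far-biased outliers do not skew the estimate. A pure-Python stand-in for a depth frame (values in millimeters), with hypothetical numbers:

```python
from statistics import median

def roi_depth_stats(depth_rows, x0, y0, w, h):
    """Return (fill_rate, median_depth) for a w x h ROI at (x0, y0).
    Zero pixels are treated as invalid, as the SDK reports them."""
    vals = [depth_rows[y][x]
            for y in range(y0, y0 + h)
            for x in range(x0, x0 + w)
            if depth_rows[y][x] > 0]
    fill = len(vals) / (w * h)
    return fill, (median(vals) if vals else 0.0)

# 4x4 ROI where half the pixels returned no depth:
frame = [[0, 0, 870, 872],
         [0, 0, 868, 871],
         [0, 0, 869, 870],
         [0, 0, 866, 873]]
fill, med = roi_depth_stats(frame, 0, 0, 4, 4)  # fill = 0.5, med = 870.0
```

A low fill rate in the ROI would have flagged our small untextured box long before the distance error became visible.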

MartyG-RealSense commented 2 years ago

A RealSense team member recommends in a case about sensing small objects at https://github.com/IntelRealSense/librealsense/issues/4175#issuecomment-507448389 that an object take up at least 9 pixels on the image in order to be trackable. At the distance your camera is positioned from the diodes, your setup is similar to the tracked object in the linked case.

image

image
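The 9-pixel rule is easy to sanity-check from projection geometry: an object's on-image footprint is roughly fx × size / distance. The focal length and sizes below are assumed illustrative values, not measurements from this case:

```python
def pixel_footprint(object_size_m, distance_m, fx_px=631.0):
    # Approximate width in pixels that an object of the given physical
    # size spans at the given distance (pinhole projection).
    return fx_px * object_size_m / distance_m

# A 3 mm IR diode at 1 m spans under 2 px, below a ~3x3 px minimum,
# while a 1 cm feature at the same distance spans over 6 px across.
diode_px = pixel_footprint(0.003, 1.0)
feature_px = pixel_footprint(0.01, 1.0)
```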

MartyG-RealSense commented 2 years ago

Hi @powellsky Do you require further assistance with this case, please? Thanks!

powellsky commented 2 years ago


Sorry for the late reply. We noticed that the measured depth varies with the complexity of the background; hypothetically, the more complex the background, the better the detection. My co-worker and I tested detection on the same object, at the same distances, and with the same camera settings, but in different environments (IR projector on). In my case detection varied depending on where in the room the object was, while my co-worker had no issues. One of the tricks that solved my issue was placing a highly textured object behind the object we want to detect.

MartyG-RealSense commented 2 years ago

It's great to hear that you found a solution that worked for you!

If you are satisfied with the outcome of this case, please feel free to close it with the Close Issue button under the comment writing box. If you have further questions though then I will be pleased to help. Thanks again!