IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

[D435] Configuration for 4m accuracy #3139

Closed amaanda closed 5 years ago

amaanda commented 5 years ago

Required Info
Camera Model D435
Firmware Version 05.10.06.00
Operating System & Version Win10
Platform PC
SDK Version 2.17.1
Language python
Segment others

Hello.

I am currently using a D435 for 3D reconstruction of objects. The recordings are done outdoors, with a lot of sunlight. Until now we have been able to get good results from the depth map when filming up to 2.5 meters from the object, but as I try filming from further away, the accuracy quickly goes down. The D435 documentation states that its maximum range is approx. 10 meters, depending on calibration, scene, and lighting conditions. In order to keep using Intel for my application, I would need a better depth map at a distance of approx. 4 meters. I wonder if there is any recommendation on how to get the best performance from the D435 in this situation.

Thanks in advance.

MartyG-RealSense commented 5 years ago

Although the depth sensing range of the 400 Series is 10 m, accuracy starts to drift noticeably after about 3 m. This is due to RMS (root mean square) depth error, which grows with distance and is greater on the D435 model than on the D415 model.

[image: depth RMS error vs. distance chart for D415 and D435]
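For reference, Intel's tuning guide gives a theoretical model for this error. Below is a rough Python sketch of that calculation; the baseline and field-of-view figures are nominal approximations and the 0.08 subpixel value is just a typical assumption, so treat the printed numbers as ballpark estimates rather than measured accuracy.

```python
# Rough sketch of the theoretical depth RMS error formula from the tuning guide:
#   rms_error(mm) = distance(mm)^2 * subpixel / (focal_length(px) * baseline(mm))
#   focal_length(px) = (x_resolution / 2) / tan(HFOV / 2)
# Baseline/HFOV values below are nominal approximations; 0.08 subpixel is a
# typical figure for a well-calibrated camera, not a measurement.
import math

def depth_rms_error_mm(distance_mm, x_res, hfov_deg, baseline_mm, subpixel=0.08):
    focal_px = (x_res / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return distance_mm ** 2 * subpixel / (focal_px * baseline_mm)

# Approximate comparison at the 4 m distance asked about above
print("D435 @ 4 m: ~%.0f mm RMS" % depth_rms_error_mm(4000, 848, 87.0, 50.0))
print("D415 @ 4 m: ~%.0f mm RMS" % depth_rms_error_mm(4000, 848, 65.0, 55.0))
```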

Intel's excellent illustrated camera tuning guide says of RMS error: "On passive textured target, expect ~ 30% better RMS with the laser turned OFF (due to residual laser speckle)".

https://realsense.intel.com/wp-content/uploads/sites/63/BKM-For-Tuning-D435-and-D415-Cameras-Webinar_Rev3.pdf?language=en_US

If the camera's IR Projector component is being used to project a semi-random dot pattern onto the scene (to add analyzable detail to low-texture surfaces like doors and walls), the projection can cause speckle noise on the image. This tends to be avoided by using an external LED-based projector to add texture to the scene instead of the camera's built-in laser-based projector.

You can also read about projectors in the camera tuning guide.

The guide adds that the RealSense Depth Quality Tool can provide metrics that include RMS error. There is also a white-paper document you can read about Intel's recommended depth testing methodology:

https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_DepthQualityTesting.pdf

amaanda commented 5 years ago

Thanks, I see. So apparently the D415 would perform better at long distance, since it has a lower RMS error. My team chose to buy the D435 because we have a moving target. Is there a specific object speed above which the D415 is not recommended?

I will read the guides carefully. Now, about the IR Projector: I will run some tests with it turned off, but I am not sure it will make a big difference, since we are filming in a very bright environment. As far as I understand, the pattern only affects closer and flatter objects, is that correct?

MartyG-RealSense commented 5 years ago

I don't know of a specific speed threshold above which motion artifacts are introduced into the D415's image. It at least seems able to handle the speed of normal human motion without problems, as the company Jaunt was using it for 360 degree avatar capture.

https://venturebeat.com/2018/08/30/jaunt-shows-off-new-augmented-reality-360-degree-full-body-selfies/

It is probably at higher speeds, such as use with motor vehicles, that the D435 becomes the better camera to use because of its faster global shutter.

The RealSense 400 Series cameras differ from other depth cameras in that they actually perform better in strong light. In those environments the exposure can be reduced to around 1 ms, which also reduces motion artifacts. Left-click on the image below to read the details in full size.

[image: slide on depth performance and reduced exposure in strong light]
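As a rough illustration of dropping to around 1 ms exposure with pyrealsense2 (a sketch only; it assumes the stereo module's exposure option is expressed in microseconds and that a camera is connected):

```python
# Sketch: disable auto-exposure on the depth sensor and set ~1 ms exposure.
# Assumes the exposure option is in microseconds (so 1000 ~= 1 ms).
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

depth_sensor.set_option(rs.option.enable_auto_exposure, 0)
depth_sensor.set_option(rs.option.exposure, 1000)

pipeline.stop()
```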

There is a discussion about external projectors on this link:

https://forums.intel.com/s/question/0D50P0000490XhTSAU/external-led-projector-vs-depth-quality?language=en_US

The bright-environment image in that discussion shows how the dot pattern projects over the scene.

[image: dot pattern projected over a brightly lit scene]

RealSenseCustomerSupport commented 5 years ago

Hi @amaanda,

Did you try those depth settings, and did they help in your test scene? Can you also share the depth, IR and color images from your test scene?

amaanda commented 5 years ago

Yes, I have tested some different settings. Sorry, but I don't think I can share them; all the test images I have belong to the startup I work at. We are still considering whether Intel is what we need for our application. Our target is an animal, which may either pace or run, usually from 2.5 to 4.5 meters away from our camera. This distance is not 100% flexible, due to our specific application. I thought about buying a D415 to test as well because of the moving-speed issue, but I am not sure anymore.

MartyG-RealSense commented 5 years ago

Your mention of animals pacing / running reminded me of a project that I gave free advice to a year ago in regard to the possibility of using RealSense in their simulation setup. They wanted to do a 3D reconstruction, locate important key points on the animal and automatically measure distances and angles between these points. They also wanted to apply machine learning to the data to analyze the animal motion.

As those were private discussions (though not subject to NDA), I won't give person, animal or company names, of course, but I'll try to adapt some of my advice for you.

To cut a long discussion short, the best option seemed to be to use 360 degree motion capture (mocap) on the animal, perhaps using markers to follow specific body points. In the year since I made that suggestion, Intel have done significant development in this field with RealSense, recently culminating in a seminar (available at the YouTube link below) on live motion data capture and processing.

https://www.youtube.com/watch?v=VSHDyUXSNqY

amaanda commented 5 years ago

@MartyG-RealSense thanks for your advice and the seminar link.

Unfortunately, for our application we are not able to do 360 degree mocap, since our goal is to get a product out of this and for now that is not economically viable, so we must use only one camera and work our way around it. What I am trying to do is combine two different things: the depth frames provided by Intel along with a second 3D reconstruction done using the stereo infrared images. We are currently using just a pair of stereo images from a different company's camera, but our biggest challenge is that camera's robustness.

I'd like to ask if you have any thoughts on which depth camera from the D400 series would be more suitable for our application, considering distance accuracy, depth reconstruction quality, and the fact that the animal may or may not run while being filmed.

MartyG-RealSense commented 5 years ago

In the advanced avatar tech that I developed myself over 5 years, and which has just come to maturity, I use a custom system I created called CamAnims that I began building when RealSense first launched in 2014. It takes number-value inputs from sources such as a camera or a traditional controller device and animates the joints of a pre-made body.

The body is customizable, and at a future time I'd like to tie the customizer to camera scans to adjust the body in real time (it currently adjusts using X-Y controller input values). The inputs also control the pace of the body (walk or run) and hence how the joint movement changes as the pace changes (e.g. larger up-down leg strides with the lower legs swinging further back when running, and the arm swing pumping harder). The animation system blends data in real time, so that it can seamlessly make additive adjustments instead of overwriting the previous animation state, resulting in super-lifelike body movements.

So if you can track the travel speed of a real life animal, that could be translated into the same motion speed for a virtual animal in a virtual simulation environment.

In regard to your question about camera models ... you will get greater depth accuracy over distance (less RMS error) with the D415, though its slower rolling shutter (compared to the D435's faster global shutter) means that if you are tracking a very fast animal, you may end up with smears or other artifacts on the image. The D415 can cope with normal human motion. I am not sure what the speed threshold is where it begins to smear.

amaanda commented 5 years ago

@MartyG-RealSense My goal is not to track the travel speed or the joints, but to track the animal and extract information from its 3D reconstruction only.

I see. So apparently the only way to test which camera would be more suitable for our product is to buy a D415 and test whether we get any artifacts.

Thanks for sharing your experience.

MartyG-RealSense commented 5 years ago

I went back to the start of your message and noticed you had listed Python as the language you are working with. In the past day, I suggested a reconstruction system for Python called pypoisson to another user.

https://github.com/mmolero/pypoisson
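As a starting point for feeding such a reconstruction, here is a rough pyrealsense2 sketch that extracts a point cloud from one depth frame. The pypoisson call is only indicated in comments, because Poisson reconstruction also needs per-point normals (which have to be estimated separately) and the exact pypoisson signature should be checked against that repository's README.

```python
# Sketch: grab one depth frame and convert it to an Nx3 point cloud with
# pyrealsense2. Poisson reconstruction (pypoisson) would additionally need
# per-point normals, which are not computed here.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()

    pc = rs.pointcloud()
    points = pc.calculate(depth)

    # Vertices as an Nx3 float32 array of (x, y, z) in meters
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    verts = verts[np.any(verts != 0, axis=1)]  # drop invalid zero-depth points

    # With normals estimated for `verts`, pypoisson could then be called roughly as
    # (signature taken from the pypoisson README -- treat as an assumption here):
    #   from pypoisson import poisson_reconstruction
    #   faces, vertices = poisson_reconstruction(verts, normals, depth=10)
finally:
    pipeline.stop()
```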

amaanda commented 5 years ago

Will test it. Thanks!

RealSenseCustomerSupport commented 5 years ago

Hi @amaanda,

Any good results?

RealSenseCustomerSupport commented 5 years ago

More info about BKM for getting best accuracy out of D435:

  1. Use 848x480 resolution. This gives the best results.
  2. Use our temporal filter. Especially if you can use the 90 fps mode with a short exposure (if there is enough light in the scene), you can average multiple frames and still have reasonable speed (note that we recommend our edge-preserving temporal filter). This can help >2x.
  3. When projecting onto un-textured scenes, use a denser projection pattern. This can mean using 2-3 D435 cameras pointed at the same scene. This can help ~3x.
  4. Implement the accuracy linearity fix with A=0.08. This can help ~3x.
  5. Use the edge-preserving spatial post-processing filter. This can help ~2x. (A pyrealsense2 sketch of points 1, 2 and 5 appears below.)

Also, refer to https://realsense.intel.com/which-device-is-right-for-you/ for more data about the shutter (rolling versus global).
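Here is a minimal pyrealsense2 sketch of points 1, 2 and 5 above (848x480 at 90 fps plus the edge-preserving spatial and temporal filters); filter parameters are left at their defaults and the frame loop is only illustrative:

```python
# Sketch: 848x480 depth at 90 fps with edge-preserving spatial and temporal
# post-processing filters applied to each frame (default filter settings).
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 90)
pipeline.start(config)

spatial = rs.spatial_filter()    # edge-preserving spatial smoothing
temporal = rs.temporal_filter()  # averages over frames while preserving edges

try:
    for _ in range(300):  # bounded loop for the sketch
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        if not depth:
            continue
        depth = spatial.process(depth)
        depth = temporal.process(depth)
        # ... use the filtered depth frame for measurement / reconstruction ...
finally:
    pipeline.stop()
```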

agrunnet commented 5 years ago

Please ignore point 4 for now. That is coming soon :)

amaanda commented 5 years ago

Hi. My team and I are currently testing the D415; I will give feedback when it's available :)

dibet commented 5 years ago

Hi, what do you mean by "Implement Accuracy linearity fix with A=0.08. - This can help 3x"?

agrunnet commented 5 years ago

Please ignore that for now. It is a new feature that will be released soon which was included here by mistake. It is a subpixel linearity fix that will remove small ripples in the linear accuracy curve.

ercarpio commented 5 years ago

Hi @agrunnet @RealSenseCustomerSupport ,

Was the subpixel linearity feature released in version 2.23? If so, do we still need to wait for firmware 5.11.9 to be released (as stated in #4100 ) to use it?

Thanks!

agrunnet commented 5 years ago

Yes. We will be publishing a white paper on it very soon.

yuemasy commented 5 years ago

More info about BKM for getting best accuracy out of D435:

  1. Use 848x480 resolution. This gives the best results.
  2. Use our temporal filter. Especially if you can use the 90 fps mode with a short exposure (if there is enough light in the scene), you can average multiple frames and still have reasonable speed (note that we recommend our edge-preserving temporal filter). This can help >2x.
  3. When projecting onto un-textured scenes, use a denser projection pattern. This can mean using 2-3 D435 cameras pointed at the same scene. This can help ~3x.
  4. Implement the accuracy linearity fix with A=0.08. This can help ~3x.
  5. Use the edge-preserving spatial post-processing filter. This can help ~2x.

Also, refer to https://realsense.intel.com/which-device-is-right-for-you/ for more data about the shutter (rolling versus global).

Hi @RealSenseCustomerSupport,
I am also having trouble choosing between the D415 and the D435. Currently I use a D415 to get better depth quality, but the D435 has a wider FOV than the D415, so I am trying to work out which is the best choice for me. I found this info about tuning the D435 very useful; could you please give more info about the D415? Thank you.

agrunnet commented 5 years ago

Did you check out the following link? https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance

yuemasy commented 5 years ago

Did you check out the following link? https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance

Hi @agrunnet Yes, I have read it and tried changing the setup. Thank you.

@RealSenseCustomerSupport @MartyG-RealSense I have two D435 devices and one D415. I put each camera in front of a wall at a distance of 2.23 m. Of the two D435 cameras, one displays about 2.35 m (I tested this one with the Depth Quality Tool, RMS error = 0.06-0.07) and the other shows 2 m. The error is too large. The D415 value is correct. How can I solve this problem? Thank you.

Firmware 5.11.6, SDK 2.17.0, Win 10
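One quick way to sanity-check a reported wall distance outside the Depth Quality Tool is to read back the depth at the image centre with pyrealsense2; a rough sketch follows (the stream settings are only illustrative):

```python
# Sketch: print the depth value at the image centre to spot-check a known
# wall distance (e.g. a wall ~2.23 m away).
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    cx, cy = depth.get_width() // 2, depth.get_height() // 2
    print("Depth at image centre: %.3f m" % depth.get_distance(cx, cy))
finally:
    pipeline.stop()
```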

karlita101 commented 4 years ago

Although the depth sensing range of the 400 Series is 10 m, accuracy starts to drift noticeably after about 3 m. This is due to RMS (root mean square) depth error, which grows with distance and is greater on the D435 model than on the D415 model.

[image: depth RMS error vs. distance chart for D415 and D435]

Intel's excellent illustrated camera tuning guide says of RMS error: "On passive textured target, expect ~ 30% better RMS with the laser turned OFF (due to residual laser speckle)".

https://realsense.intel.com/wp-content/uploads/sites/63/BKM-For-Tuning-D435-and-D415-Cameras-Webinar_Rev3.pdf?language=en_US

If the camera's IR Projector component is being used to project a semi-random dot pattern onto the scene (to add analyzable detail to low-texture surfaces like doors and walls), the projection can cause speckle noise on the image. This tends to be avoided by using an external LED-based projector to add texture to the scene instead of the camera's built-in laser-based projector.

You can also read about projectors in the camera tuning guide.

The guide adds that the RealSense Depth Quality Tool can provide metrics that include RMS error. There is also a white-paper document you can read about Intel's recommended depth testing methodology:

https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/RealSense_DepthQualityTesting.pdf

The link to the webinar doesn't seem to work; is there any way to still access it?

Also, would it be possible to provide a link to the document that the depth RMS graph comes from?

Thank you

MartyG-RealSense commented 4 years ago

Hi @karlita101 Intel's camera tuning guide is now available as a web page as well as a PDF. The RMS chart can be found in point 5, "Understand theoretical limit", of the section of the guide linked to below:

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-verify-performance-regularly-on-a-flat-wall-or-target

A PDF version of the guide is here:

https://www.intel.com/content/dam/support/us/en/documents/emerging-technologies/intel-realsense-technology/BKMs_Tuning_RealSense_D4xx_Cam.pdf

RealSense webinar recordings are available from this link:

https://www.intelrealsense.com/webinars-and-events/

The webinar that the guide relates to is likely the one that is second from the bottom of the list, Tuning the Intel RealSense D400 cameras for optimal performance.

Alex-Beh commented 3 years ago

Hello, I am facing a similar issue. The accuracy from /camera/aligned_depth_to_color/image_raw is quite good up to 2.5 meters, but it starts to drop beyond that. I have tried the on-chip calibration and the filters mentioned above, but the issue remains. I am looking for anything I can do to improve the accuracy of the depth image from the D435 camera.

MartyG-RealSense commented 3 years ago

Hi @Alex-Beh This is the expected accuracy for the D435 model at this distance from the camera. Because of RMS error, the error increases as the distance from the camera increases (roughly with the square of the distance). On the D435 / D435i camera models the drift in accuracy starts to become noticeable around the 3 meter distance and beyond.

Disabling the IR Emitter that casts a semi-random dot pattern may reduce RMS error, though this will have a negative effect on the depth image if the scene that the camera is observing is not well lit.
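If you want to test that quickly, here is a minimal pyrealsense2 sketch for switching the emitter off; it just toggles the emitter_enabled option on the depth sensor, and is only worth trying in a well-lit scene:

```python
# Sketch: turn the IR emitter off to check whether laser speckle is adding
# to the depth noise (keep the scene well lit while the emitter is off).
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
depth_sensor = profile.get_device().first_depth_sensor()

if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 0)  # 0 = off, 1 = on

pipeline.stop()
```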

Further information about RMS error is provided in the section of Intel's camera tuning white-paper document linked to below.

https://dev.intelrealsense.com/docs/tuning-depth-cameras-for-best-performance#section-verify-performance-regularly-on-a-flat-wall-or-target

If you have the option to change to a different camera model, the D455 has 2x the accuracy over distance of the D435, meaning that it has the same accuracy at 6 meters that the D435 does at 3 meters. The D455 has a minimum depth sensing distance of 0.4 meters though, compared to 0.1 meters on D435, so whilst it is excellent for long-range depth sensing it may not be a suitable choice if you require very close range depth measurement.