IntelRealSense / librealsense

Intel® RealSense™ SDK
https://www.intelrealsense.com/
Apache License 2.0

How the projected dots are processed #13377

Closed JaouadROS closed 1 month ago

JaouadROS commented 1 month ago

I want to learn more about the algorithms and theory behind how the IR images are processed and how the projected dots are used to obtain depth information. Can I find that somewhere? I'm not sure whether any of it is in the SDK source code, and I also checked the documentation, which is very good by the way.

MartyG-RealSense commented 1 month ago

Hi @JaouadROS. Information about how the dot pattern is analyzed and processed can be found in the dot-pattern section of Intel's RealSense guide to projectors at the link below.

https://dev.intelrealsense.com/docs/projectors#5-the-dot-pattern
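A practical way to see the dot pattern's contribution is to toggle the projector (the emitter_enabled option in the SDK) while pointing the camera at a low-texture surface such as a blank wall. Below is a minimal sketch in Python with the pyrealsense2 wrapper; the resolution and the small centre_depth helper are illustrative choices, not part of the SDK.

```python
# Minimal sketch (assumes a D400 camera and the pyrealsense2 wrapper):
# toggle the IR projector (emitter) and compare depth with and without
# the projected dot pattern.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
profile = pipeline.start(config)
depth_sensor = profile.get_device().first_depth_sensor()

def centre_depth():
    """Illustrative helper: read the depth at the image centre, in metres."""
    # Skip a few frames so the new emitter setting has taken effect.
    for _ in range(5):
        frames = pipeline.wait_for_frames()
    return frames.get_depth_frame().get_distance(424, 240)

if depth_sensor.supports(rs.option.emitter_enabled):
    depth_sensor.set_option(rs.option.emitter_enabled, 0)   # projector off
    print("emitter off, centre depth:", centre_depth(), "m")

    depth_sensor.set_option(rs.option.emitter_enabled, 1)   # projector on
    print("emitter on,  centre depth:", centre_depth(), "m")

pipeline.stop()
```

With the emitter off, a textureless wall typically produces many zero-depth holes; with it on, the dots give the stereo matcher something to correlate and the depth image fills in.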

Information about Intel's stereo depth algorithm for RealSense 400 Series cameras is limited, as the algorithm is closed-source and protected. What is known, though, is that it uses stereo matching.

Intel have a general guide to stereo matching at the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/depth-from-stereo.md
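As a rough illustration of the principle described in that guide (and not the camera's internal, closed-source algorithm), the sketch below captures a left/right infrared pair with the pyrealsense2 wrapper and runs OpenCV's off-the-shelf StereoBM block matcher on it. The projected dots are not treated specially by the matcher; they simply give low-texture surfaces enough texture for the left-to-right correlation to find unambiguous matches.

```python
# Sketch: approximate depth-from-stereo on the D400's left/right IR pair
# using OpenCV's generic StereoBM block matcher. This illustrates the
# principle only, not the camera's internal (closed-source) algorithm.
import numpy as np
import cv2
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# Infrared stream index 1 = left imager, index 2 = right imager.
config.enable_stream(rs.stream.infrared, 1, 848, 480, rs.format.y8, 30)
config.enable_stream(rs.stream.infrared, 2, 848, 480, rs.format.y8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    left = np.asanyarray(frames.get_infrared_frame(1).get_data())
    right = np.asanyarray(frames.get_infrared_frame(2).get_data())

    # Block matching: for each pixel in the left image, search along the
    # same row of the right image for the best-matching patch. The dot
    # pattern provides the texture that makes these matches unambiguous.
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    print("median disparity (pixels):",
          float(np.median(disparity[disparity > 0])))
finally:
    pipeline.stop()
```

The left and right IR streams from the camera are already rectified, which is why a simple row-wise block matcher can be applied to them directly.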

In regard to RealSense-specific information, pages 18-19 of the current edition of the datasheet document for the RealSense 400 Series cameras provide the following explanation:

https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet


> The Intel RealSense D400 series depth camera uses stereo vision to calculate depth. The stereo vision implementation consists of a left imager, right imager, and an optional infrared projector. The infrared projector projects a non-visible static IR pattern to improve depth accuracy in scenes with low texture.
>
> The left and right imagers capture the scene and send imager data to the depth imaging (vision) processor, which calculates depth values for each pixel in the image by correlating points on the left image to the right image and via the shift between a point on the left image and the right image.
>
> The depth pixel values are processed to generate a depth frame. Subsequent depth frames create a depth video stream.
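
The "shift" that the datasheet refers to is the disparity: the horizontal displacement, in pixels, between where the same point appears in the left and right images. For rectified stereo the relationship is depth Z = (focal length in pixels × baseline) / disparity, so larger shifts mean closer objects. Below is a minimal sketch of that conversion in Python with the pyrealsense2 wrapper; the focal length and baseline are read from the camera's own calibration, while the 25-pixel disparity is just a made-up example value.

```python
# Sketch: convert a disparity (pixel shift between left and right images)
# into depth using Z = f * B / d. Focal length and baseline are read from
# the camera's calibration; the example disparity value is hypothetical.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.infrared, 1, 848, 480, rs.format.y8, 30)
config.enable_stream(rs.stream.infrared, 2, 848, 480, rs.format.y8, 30)
profile = pipeline.start(config)

left = profile.get_stream(rs.stream.infrared, 1).as_video_stream_profile()
right = profile.get_stream(rs.stream.infrared, 2).as_video_stream_profile()

f = left.get_intrinsics().fx                                   # focal length, pixels
baseline = abs(right.get_extrinsics_to(left).translation[0])   # imager separation, metres

d = 25.0                       # hypothetical disparity of 25 pixels
depth = f * baseline / d       # depth in metres
print(f"f = {f:.1f} px, baseline = {baseline * 100:.1f} cm -> depth = {depth:.2f} m")

pipeline.stop()
```

As a rough sanity check, with a focal length of 420 pixels and a 5 cm baseline (illustrative numbers only), a 25-pixel disparity works out to about 420 × 0.05 / 25 ≈ 0.84 m.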

MartyG-RealSense commented 1 month ago

Hi @JaouadROS. Do you require further assistance with this case, please? Thanks!

JaouadROS commented 1 month ago

No, thank you very much for your detailed comment.

MartyG-RealSense commented 1 month ago

You are very welcome, @JaouadROS - thanks very much for the update!