Closed pgswag closed 1 year ago
Hi @pgswag The RealSense 400 Series camera range, including D435, uses stereo depth constructed from left and right sensors. The depth pixel value is a measurement from the parallel plane of the imagers and not the absolute range.
The left and right imagers capture the scene and send the image data to the depth imaging (vision) processor, which calculates a depth value for each pixel by correlating points on the left image with points on the right image and measuring the shift (disparity) between them. The depth pixel values are processed to generate a depth frame, and successive depth frames form a depth video stream.
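The relationship between that left/right pixel shift and depth can be sketched with the standard stereo triangulation formula, Z = f * B / d. This is a minimal illustration of the principle, not librealsense code; the focal length and baseline values below are hypothetical example numbers.

```python
# Sketch of the stereo-matching principle: depth is inversely
# proportional to the disparity (pixel shift) between the left and
# right images. All parameter values here are illustrative assumptions.
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Z = f * B / d, with f in pixels, B in metres, d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Example: ~50 mm baseline (roughly the D435's, used here only for
# illustration), 640 px focal length, 32 px measured disparity.
z = depth_from_disparity(disparity_px=32.0, focal_length_px=640.0, baseline_m=0.050)
print(round(z, 3))  # depth in metres
```

Note that a larger disparity means a closer object, which is why stereo depth error grows with distance: far objects produce sub-pixel shifts.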
The RealSense SR300 and SR305 models use coded light technology, which is similar to structured light.
“The depth pixel value is a measurement from the parallel plane of the imagers and not the absolute range.” So that means the depth is “a” as shown in the figure below instead of “b”?
When I hold the D435 parallel to a wall (at zero angle), is the depth of all points in the D435's field of view theoretically the same? For example, does the point at the centre of the field of view have the same depth as the points at the four corners?
In other words, the depth is the projection onto the Z axis, not the distance from the origin, right?
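The distinction can be made concrete with a short sketch: given a Z-depth value and the pixel it belongs to, the radial distance from the camera origin can be recovered with the pinhole model. The intrinsics (fx, fy, cx, cy) below are hypothetical example values, not actual D435 calibration data.

```python
import math

# Sketch: convert a Z-depth value at pixel (u, v) into radial distance
# from the camera origin, assuming a simple pinhole model with
# hypothetical intrinsics. Z is the value stored in the depth frame.
def radial_from_z(z, u, v, fx, fy, cx, cy):
    x = (u - cx) / fx * z   # lateral offset in metres
    y = (v - cy) / fy * z   # vertical offset in metres
    return math.sqrt(x * x + y * y + z * z)

# Flat wall 1 m away, camera held parallel to it: Z is 1.0 everywhere.
print(radial_from_z(1.0, 320, 240, 600.0, 600.0, 320.0, 240.0))  # centre pixel: radial distance equals Z
print(radial_from_z(1.0, 0, 0, 600.0, 600.0, 320.0, 240.0))      # corner pixel: radial distance exceeds Z
```

So for a flat wall parallel to the camera, every pixel reports the same Z, while the true range to the corners is longer than the range to the centre.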
A diagram at https://github.com/IntelRealSense/librealsense/issues/7279#issuecomment-690188950 provided by a RealSense team member further illustrates the relationship between range and Z-depth.
They add "The content of the depth frame is "Z" values calculated for every pixel in the camera's frustum (or cropped FOV). In the sketch it is clear that while range (or radial distance) may coincide with depth (or "Z"), in 99.99% they will be different".
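To illustrate the point in that quote, a per-pixel Z value plus the camera intrinsics is enough to reconstruct the full 3D point. The sketch below mimics what a deprojection routine such as librealsense's rs2_deproject_pixel_to_point does for a distortion-free pinhole model; the intrinsics are again hypothetical example values.

```python
# Sketch of deprojection for a simple pinhole model (no lens
# distortion): map a pixel (u, v) with depth-frame value z to a 3D
# point in camera coordinates. Intrinsics are illustrative assumptions.
def deproject(u, v, z, fx, fy, cx, cy):
    return ((u - cx) / fx * z, (v - cy) / fy * z, z)

pt = deproject(u=100, v=50, z=2.0, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
# pt[2] is exactly the stored depth value "Z"; pt[0] and pt[1] place
# the point laterally and vertically within the camera's frustum.
```

This is why the depth frame stores Z rather than radial range: Z composes directly with the intrinsics to give 3D coordinates, whereas a radial value would first have to be projected back onto the optical axis.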
Thank you for your answer!
The left and right imagers capture the scene and send the image data to the depth imaging (vision) processor, which calculates a depth value for each pixel by correlating points on the left image with points on the right image and measuring the shift (disparity) between them. The depth pixel values are processed to generate a depth frame, and successive depth frames form a depth video stream.
Based on your response, the depth measurement principle of the RealSense D435 camera is binocular (stereo) vision, while the RealSense SR300 series is based on structured light. Is that correct?
Hi @5204338 The RealSense 400 Series cameras use stereo depth (constructed from left and right sensors), so they work like binoculars. The SR300 camera uses Coded Light, which is similar to structured light.
OK, I understand now. Thank you very much.
Issue Description
Is the depth solution of the D435 based on the binocular parallax principle or on a structured light scheme?