Closed: bmegli closed this 4 years ago
From Intel tuning-depth-cameras-for-best-performance
The optimal depth resolution is:
From librealsense#7820
From Intel tuning-depth-cameras-for-best-performance:
> The “LEFT IR” camera has the benefit of always:
> 1. being pixel-perfect aligned, calibrated, and overlapped with the depth map,
> 2. being perfectly time-synchronized,
> 3. requiring no additional computational overhead to align color-to-depth, and
> 4. giving no additional occlusion artifacts, because it is exactly co-aligned.
>
> The main drawback is that it will normally show the projector pattern if the projector is turned on.
So we should expect worse results when texturing depth with the RGB sensor (whether depth is aligned to RGB or RGB to depth) than when texturing depth with the left IR image.
From #14 (FOV), for the D435 the depth FOV is wider than the RGB FOV, so:

- aligning depth to color will crop the depth image to the narrower RGB field of view,
- aligning color to depth will leave the outer parts of the depth image without color data.
The librealsense align example may be used to see what happens during alignment (cropping).
From librealsense#3042, YUYV is the native format of the RGB sensor data.
Ideally we would want to use the YUYV format for the RGB sensor to avoid unnecessary conversions.
From the librealsense alignment code we see that it is not possible to align color to depth with the YUYV color format.
From HVE#18, VAAPI supports both YUYV (yuyv422) and RGBA8/BGRA8 (as rgb0/bgr0), but not RGB8/BGR8.
The best we can do is:

- keep YUYV and align depth to color (no color conversion needed), or
- convert YUYV to RGBA8/BGRA8 when aligning color to depth.
In theory, alignment should be performed on undistorted images.
From librealsense#1430, D400 doesn't use distortion models (all coefficients are zero):
> We consider adding coefficient estimation to the RGB calibration to reduce the distortion (by about 1 pixel at extremes), but at the moment projection without coefficients is the most accurate you can do.
As far as I know nothing has changed in this area (undistortion code is present but the coefficients are always zeroed).
Functionally it is finished. The readme needs some updates before merging.
Ready for merge
Merged.
The last thing to do is to update the how-it-works section on the wiki.
finished
Continuing in #13, the RNHVE part of HVS#8.
A proof-of-concept is already implemented in the depth-color branch.