matlabbe opened this issue 1 year ago
Hi @matlabbe, thank you for the detailed report on this issue. We have a few approaches that we will be testing in the next few weeks to mitigate it. What we suggest, and what has worked for a few companies, is to crop the right-most band of the depth image, which produces random/noisy depth pixels. You can crop on-device as well using the ImageManip node. Thoughts? Thanks, Erik
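A host-side equivalent of that crop (the on-device version would use the ImageManip node instead) can be sketched with NumPy; the band width here is a hypothetical value, not one from the source:

```python
import numpy as np

def crop_right_band(depth: np.ndarray, band_px: int) -> np.ndarray:
    """Drop the right-most `band_px` columns of a depth frame, where the
    stereo matcher tends to produce random/noisy depth pixels."""
    return depth[:, :depth.shape[1] - band_px]

# e.g. a 1280x800 depth frame with a hypothetical 96-pixel noisy band
depth = np.zeros((800, 1280), dtype=np.uint16)
cropped = crop_right_band(depth, 96)  # shape becomes (800, 1184)
```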
I tried the Mesh approach with these changes (explained in my previous post):
```diff
diff --git a/depthai_helpers/calibration_utils.py b/depthai_helpers/calibration_utils.py
index ff9a776..b0f66c1 100644
--- a/depthai_helpers/calibration_utils.py
+++ b/depthai_helpers/calibration_utils.py
@@ -394,7 +394,7 @@ class StereoCalibration(object):
         self.d1,
         self.M2,
         self.d2,
-        self.img_shape, self.R, self.T)
+        self.img_shape, self.R, self.T, alpha=0)
     else:
         self.R1, self.R2, self.P1, self.P2, self.Q = cv2.fisheye.stereoRectify(self.M1,
             self.d1,
@@ -959,8 +959,8 @@ class StereoCalibration(object):
     print("Mesh path")
     print(curr_path)
-    map_x_l, map_y_l = cv2.initUndistortRectifyMap(self.M1, self.d1, self.R1, self.M2, self.img_shape, cv2.CV_32FC1)
-    map_x_r, map_y_r = cv2.initUndistortRectifyMap(self.M2, self.d2, self.R2, self.M2, self.img_shape, cv2.CV_32FC1)
+    map_x_l, map_y_l = cv2.initUndistortRectifyMap(self.M1, self.d1, self.R1, self.P1, self.img_shape, cv2.CV_32FC1)
+    map_x_r, map_y_r = cv2.initUndistortRectifyMap(self.M2, self.d2, self.R2, self.P2, self.img_shape, cv2.CV_32FC1)
     map_x_l_fp32 = map_x_l.astype(np.float32)
     map_y_l_fp32 = map_y_l.astype(np.float32)
```
These changes do fix the black border problem.
Rectified images with original version:
Rectified images with modified version:
However, the `camera_info` published for the left and right images in ROS is wrong: the current rectification approach still uses a copy of M2 as both P1 and P2. With `alpha=-1` (the default), M2 is pretty close to P2, but with `alpha=0` (to remove black borders), P2 differs significantly from M2 because the left and right images are "zoomed" in to remove the black borders. Here is the wrong `camera_info`:
```
#Left:
D: [-6.0713887214660645, 54.4517707824707, -0.0003773995558731258, -0.00036596463178284466, 43.84017562866211, -6.166638374328613, 54.441646575927734, 43.68896484375]
K: [805.8760986328125, 0.0, 627.5831298828125, 0.0, 805.9297485351562, 411.1523742675781, 0.0, 0.0, 1.0]
R: [0.9999866485595703, 0.003882801625877619, -0.0034086634404957294, -0.0038728320505470037, 0.9999881982803345, 0.0029264981858432293, 0.0034199864603579044, -0.0029132580384612083, 0.999989926815033]
P: [802.166015625, 0.0, 648.2816772460938, 60.32984576865099, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 0.0, 1.0, 0.0]
#Right:
D: [6.126948833465576, -53.37107849121094, 0.0006922174943611026, -0.0006033015670254827, 565.4175415039062, 5.966142654418945, -53.328697204589844, 564.033203125]
K: [802.166015625, 0.0, 648.2816772460938, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 1.0]
R: [0.9999366998672485, -0.004832806997001171, -0.010160881094634533, 0.004803117364645004, 0.9999841451644897, -0.002944336738437414, 0.010174949653446674, 0.002895346377044916, 0.9999440312385559]
P: [802.166015625, 0.0, 648.2816772460938, 0.0, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 0.0, 1.0, 0.0]
```
Here is what the `camera_info` should look like to match the rectified images without black borders (note that P1 and P2 are not equal to M1 and M2, i.e., K1 and K2):
```
#Left:
D: [-6.0713887214660645, 54.4517707824707, -0.0003773995558731258, -0.00036596463178284466, 43.84017562866211, -6.166638374328613, 54.441646575927734, 43.68896484375]
K: [805.8760986328125, 0.0, 627.5831298828125, 0.0, 805.9297485351562, 411.1523742675781, 0.0, 0.0, 1.0]
R: [0.9999866485595703, 0.003882801625877619, -0.0034086634404957294, -0.0038728320505470037, 0.9999881982803345, 0.0029264981858432293, 0.0034199864603579044, -0.0029132580384612083, 0.999989926815033]
P: [827.490288, 0.0, 646.020584, 62.23527591, 0.0, 827.490288, 410.763664, 0.0, 0.0, 0.0, 1.0, 0.0]
#Right:
D: [6.126948833465576, -53.37107849121094, 0.0006922174943611026, -0.0006033015670254827, 565.4175415039062, 5.966142654418945, -53.328697204589844, 564.033203125]
K: [802.166015625, 0.0, 648.2816772460938, 0.0, 802.0962524414062, 409.9562683105469, 0.0, 0.0, 1.0]
R: [0.9999366998672485, -0.004832806997001171, -0.010160881094634533, 0.004803117364645004, 0.9999841451644897, -0.002944336738437414, 0.010174949653446674, 0.002895346377044916, 0.9999440312385559]
P: [827.490288, 0.0, 646.020584, 0.0, 0.0, 827.490288, 410.763664, 0.0, 0.0, 0.0, 1.0, 0.0]
```
As a suggestion, without explicitly saving P1 and P2 on the camera, you could regenerate them by calling `cv2.stereoRectify` on the camera (I think you are exporting enough info in the JSON for that), while adding to the StereoDepth API a new function, e.g. `setRectificationAlpha()`, that would set the `alpha` value passed to `stereoRectify` (e.g. `alpha=0` to remove black borders). Create the mesh on-board with the new P1 and P2. For ROS, send back the newly computed P1 and P2 in `camera_info` so that nodes receiving the rectified images (e.g. VSLAM, visual stereo odometry) have the right focal length for the zoomed stereo images without black borders.
EDIT: For us, the real problem was the wrong disparity on the sides, which caused some obstacles to disappear (rays traced with very far depth). For now, we will simply ignore X pixels on each side of the depth image. We still look forward to your refactoring of the calibration tool.
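Invalidating the side bands (rather than cropping, which would change the image size and intrinsics) can be sketched on the host with NumPy; the margin of X pixels is a placeholder, not a value from the source:

```python
import numpy as np

def mask_depth_borders(depth: np.ndarray, margin: int) -> np.ndarray:
    """Set a `margin`-pixel band on the left and right edges of a depth
    frame to 0 (invalid), so downstream nodes ignore the unreliable
    disparity there without the image geometry changing."""
    out = depth.copy()
    out[:, :margin] = 0
    out[:, -margin:] = 0
    return out
```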
New update here that could fix this issue: https://github.com/luxonis/depthai-core/pull/718
EDIT: Note that when setting alpha to 0 or 1, the `camera_info` would have to be updated on the ROS side (the focal length is modified); we would have to verify whether that is actually done.
Hello,
we are encountering a problem with the resulting depth image created from Left-Right IR cameras on OAK-D. Here is the issue (showing right image with corresponding depth overlay):
We calibrated the camera with:
The output of stereoRectify is: https://github.com/luxonis/depthai/blob/2b22e4afb94bfff6c60ddbeceaa0e20cbf170490/depthai_helpers/calibration_utils.py#L392
When launching with ROS, the camera_info looks like this:
Not sure why P is not the same as P1 and P2!? It is as if M2 were used to regenerate P1 and P2 (which seems wrong). The baseline Tx (i.e. P(0,3)) of the left `camera_info` is also wrong: modulo the cm scale, it should be 60.34308056, not 60.401063102285846. To try to remove the black border, we wanted to use the alpha parameter of stereoRectify() like this:
While M1, M2, R1 and R2 stay the same, P1 and P2 are now scaled:
This works during the calibration, we don't see any black borders in the preview. The function used to show the preview is: https://github.com/luxonis/depthai/blob/10701256518464ee02d9832205f323aa56e15a74/depthai_helpers/calibration_utils.py#L673-L679
This function requires P1 and P2, which are different from M1 and M2. Because P1 and P2 do not seem to be saved in the EEPROM, the resulting rectification with the depthai_example ROS package is exactly the same as before, with the same black border artifacts (causing wrong disparity) and the same camera_info.
Questions:
As a final note, we tried publishing the raw left and right images from depthai_example, then used http://wiki.ros.org/camera_calibration/Tutorials/StereoCalibration to recalibrate the camera ourselves; with stereo_image_proc and the new camera_info messages, we get correctly rectified images without black borders, which fixes the disparity issue. However, we would prefer computing the disparity on the camera rather than on the host computer. Hopefully we can find a solution to get good rectification on the camera.