microsoft / Azure_Kinect_ROS_Driver

A ROS sensor driver for the Azure Kinect Developer Kit.
MIT License

Issue with Camera Extrinsic Calibration #83

Open t-walker-21 opened 5 years ago

t-walker-21 commented 5 years ago

Describe the bug
We are attempting to use your Azure Kinects in high-precision robotic sensing and manipulation projects. Specifically, we are trying to find rigid spatial transforms from multiple Kinect cameras to an arbitrary fixed frame, i.e. an extrinsic calibration between the camera frame and the fixed frame. We use a ChArUco board in conjunction with OpenCV to calculate this transform. To verify the calibration, we perform an experiment to evaluate its accuracy. The steps of this experiment are outlined below:

Using the Azure_Kinect_ROS_Driver, acquire RGB and depth (depth_to_rgb) images of the ChArUco board, as well as the RGB intrinsics and distortion parameters from the camera_info topic. We collect images at 4096x3072, and the depth mode is set to WFOV unbinned.

After collecting color and depth images, we use OpenCV's aruco module to find markers in the images:

corners, ids, rejected_img_points = cv2.aruco.detectMarkers(img, dictionary)
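
(For reference, a minimal construction of the dictionary and board objects these snippets assume, using the opencv-contrib aruco API of that era; the dictionary choice and board dimensions below are placeholders and must match the printed board:)

import cv2

# Placeholder board definition -- substitute the dictionary and the
# square/marker side lengths (in meters) of the board you actually printed.
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)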

These markers are further refined:

corners, ids = cv2.aruco.refineDetectedMarkers(img, board, corners, ids, rejected_img_points)[:2]

The ChArUco corners are then interpolated:

num_charuco, charuco_corners, charuco_ids = cv2.aruco.interpolateCornersCharuco(corners, ids, img, board)

The pose (transform) of the board in the camera frame is computed:

ret, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(charuco_corners, charuco_ids, board, intrinsics['camera_matrix'], intrinsics['dist_coeffs'])

Using the depth image and intrinsics, project the 2D ChArUco points into the 3D camera frame:

x = (u - cx) * pt_depth / fx  # u is the corner x pixel coordinate, cx the x optical center, fx the x focal length
y = (v - cy) * pt_depth / fy  # v is the corner y pixel coordinate, cy the y optical center, fy the y focal length
z = pt_depth * unit_scaling   # unit_scaling converts mm to meters; here it is set to 1
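
(A vectorized sketch of the same back-projection; names are illustrative. It assumes depth is the depth image registered to the color frame and uv is an Nx2 array of corner pixel coordinates; unit_scaling would be 0.001 for 16-bit millimeter depth, 1.0 if the depth is already in the working unit, and here it is applied uniformly to all three axes:)

import numpy as np

def backproject(uv, depth, fx, fy, cx, cy, unit_scaling=1.0):
    # Look up the depth at each (rounded) corner pixel, then apply the
    # pinhole equations above to all corners at once.
    u, v = uv[:, 0], uv[:, 1]
    z = depth[v.round().astype(int), u.round().astype(int)] * unit_scaling
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=1)  # Nx3 points in the camera frame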

Using the computed transform, map these points from the 3D camera frame into the ChArUco board frame:

rotMat = cv2.Rodrigues(rvec)[0]   # 3x3 rotation matrix from the Rodrigues vector
T = np.zeros((4, 4))
T[:3, :3] = rotMat
T[:3, 3] = tvec[:, 0]
T[3, 3] = 1                       # homogeneous board-to-camera transform
T = np.linalg.inv(T)              # invert: T now maps camera-frame points into the board frame

Calculate the Euclidean distances between the depth-projected points and the ground-truth board points:

boardFramePoints = []
for p in range(len(pts3D)):
    boardFramePoints.append(np.dot(T, np.concatenate([pts3D[p], [1]]))[:3])
Given these transformed points (in the ChArUco board frame) and the ground-truth board points, calculate the Euclidean distance between the two sets:

error = np.array(boardFramePoints) - obj_points  # obj_points are the ground-truth board points from the aruco lib

euc = np.linalg.norm(error, axis=1)
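
(As a side note, the loop and error computation above collapse into a few vectorized NumPy lines; same variable names, with pts3D assumed to be an Nx3 array:)

import numpy as np

pts_h = np.hstack([pts3D, np.ones((len(pts3D), 1))])   # Nx3 -> Nx4 homogeneous
boardFramePoints = (T @ pts_h.T).T[:, :3]              # map into the board frame
euc = np.linalg.norm(boardFramePoints - obj_points, axis=1)
print("mean error [m]:", euc.mean(), "std [m]:", euc.std())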

In these experiments, we've seen errors on the order of centimeters; we expected errors of a few millimeters. Repeating this experiment across five different images and averaging the Euclidean distances, we obtain a mean error of 0.0172954 meters with a standard deviation of 0.00574098015.

What do you think could be causing this error? We've observed that the depth measurements themselves are accurate: we've used objects of known dimensions to verify them. This leads us to believe that the intrinsic/distortion calibration parameters could be incorrect. Should the distortion parameters be converted in any way before use in aruco? The distortion parameters from ROS appear to follow the rational polynomial model, and I assume the aruco functions expect plumb bob parameters, but I couldn't find anything in their documentation. Could you please give some feedback on why our transform/experiment gives the results that it does?
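
(As an aside, a quick way to confirm which distortion model the driver is advertising, sketched with rospy; the camera_info topic name here is an assumption and depends on your launch configuration:)

import rospy
from sensor_msgs.msg import CameraInfo

rospy.init_node("check_camera_info")
msg = rospy.wait_for_message("rgb/camera_info", CameraInfo)  # topic name assumed
# rational_polynomial -> 8 coefficients (k1 k2 p1 p2 k3 k4 k5 k6);
# plumb_bob -> 5 coefficients (k1 k2 p1 p2 k3)
print(msg.distortion_model, len(msg.D))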

To Reproduce
Steps to reproduce the behavior:

  1. Clone my repo which uses your intrinsics (https://github.com/t-walker-21/extrinsicCalibration/tree/master)
  2. Install the two dependencies in the README
  3. Run the python script in the README
  4. See error

Expected behavior
Given the intrinsics and distortion coefficients from the ROS node, we expect the depth projection error to be very low (~2-7 mm); however, the error is closer to 17 mm.


ooeygui commented 4 years ago

This is an amazing write-up.

helenol commented 4 years ago

Hi @t-walker-21, just to clarify, are you using the output from TF for the extrinsics, or the 4x4 transformation matrices printed on start-up from the driver? If the first, then I am currently investigating (and fixing...) a slew of issues with the TF output.

helenol commented 4 years ago

At any rate, check out #85 and #86 and see if they perhaps go some way towards resolving your problem. :)

ooeygui commented 4 years ago

Thanks @helenol. @t-walker-21, Could you validate based on Helen's changes?

t-walker-21 commented 4 years ago

Hello @helenol and @ooeygui. Thank you for addressing my issue. I pulled your changes and repeated my experiments and got the same result. The extrinsic calibration I am referring to is the computed transform between my Azure Kinect and an OpenCV calibration checkerboard/ChArUco board, not the extrinsic calibration between the color and depth sensors.

My issue is that after computing the transform from Kinect to calibration board (the extrinsic calibration), the points that lie on the board do not align well enough with the same points observed through the Azure. I believe there may be some systematic error with the Azure: perhaps there is some bias in the depth measurements, or the given intrinsic parameters are incorrect. When I perform the experiment outlined above, the average Euclidean error between the ground-truth board points and the transformed camera points is excessive (~15 mm), whereas when I repeat the same procedure with an Intel RealSense D435, the error is less than half of the Kinect error (~6.5 mm).

In my bug description, I linked a small repo where you can replicate the issue. I have included code/pictures so you can observe the error for yourselves: just snap some RGBD images of the included checkerboard/ChArUco board and run my code. You should see the same centimeter-level error that I observe.

t-walker-21 commented 4 years ago

Another calibration routine we tried was using a single ArUco tag to find the transform from the Kinect to the base frame of a UR5 robot. To achieve this, we assign a known rigid transform from the UR5 end-effector frame to the ArUco tag frame (which gives the full base_link-to-tag transform). Using the fiducials package, we acquire the transform from the camera frame to the tag frame and, consequently, the camera-to-base_link transform. When visualizing the origin of the tag frame against the point cloud in RViz, there is a clear offset of what looks to be a centimeter or two.
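
(For clarity, the transform chain being composed, as a NumPy sketch with placeholder identity matrices standing in for the measured 4x4 transforms:)

import numpy as np

T_base_ee = np.eye(4)   # UR5 base_link -> end-effector (from forward kinematics)
T_ee_tag = np.eye(4)    # end-effector -> tag (the known rigid mount)
T_cam_tag = np.eye(4)   # camera -> tag (from the fiducials package)

# Compose the chain and invert the camera-to-tag leg to get the camera pose
# expressed in the robot base frame.
T_base_cam = T_base_ee @ T_ee_tag @ np.linalg.inv(T_cam_tag)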

helenol commented 4 years ago

@t-walker-21 reading your issue more closely now... I think you may have hit the nail on the head there with the calibration issue. The intrinsics that ship with the camera are rational-polynomial, which is supported in OpenCV but requires passing in 8 parameters to undistort/project/etc. Looking at the code in aruco, they only support 5 coefficients as far as I can tell. Unfortunately the coefficients aren't a plain Taylor expansion, so you can't just cut the last 3 off and call it good. I would recommend recalibrating the camera yourself if you haven't yet.
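
(One possible workaround, sketched below and untested here: undistort the detected corners yourself with the full 8-coefficient vector via cv2.undistortPoints, which does honor a rational-polynomial model, then run the pose estimation with zero distortion:)

import numpy as np
import cv2

# dist8: all 8 rational-polynomial coefficients (k1 k2 p1 p2 k3 k4 k5 k6)
# taken from the camera_info message. Passing P=camera_matrix reprojects the
# normalized points back into an undistorted pixel frame.
undist = cv2.undistortPoints(charuco_corners, camera_matrix, dist8, P=camera_matrix)
ret, rvec, tvec = cv2.aruco.estimatePoseCharucoBoard(
    undist, charuco_ids, board, camera_matrix, np.zeros(5))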

t-walker-21 commented 4 years ago

Hello @helenol, we have obtained new intrinsics with our ChArUco boards, and these have improved our results when repeating the experiment. Do you or @ooeygui know whether the Kinect is meant to work out of the box with the default intrinsics, or is it meant to be calibrated at setup time? It's strange to me that our calibration did better than the factory's.

bnjff commented 4 years ago

Hi @helenol, do you calibrate only the color camera's intrinsics, or additionally the depth camera's intrinsics, the color-to-depth extrinsics, etc.?

amonnphillip commented 4 years ago

Hello, it would be great to know whether performing our own color camera calibration is enough to solve this issue. I have spent 3+ working days trying to figure out what sounds like this exact problem.

Also, are you sure the aruco lib in OpenCV only uses 5 distortion coefficients? I took a look at the OpenCV (version 3+) code and it does look like it uses all the coefficients passed (in the undistort method).

Please advise.

ooeygui commented 4 years ago

We're currently in discussions about how to proceed. The SDK team offered these docs: https://docs.microsoft.com/en-us/azure/Kinect-dk/use-calibration-functions https://docs.microsoft.com/en-us/azure/Kinect-dk/use-image-transformation

amonnphillip commented 4 years ago

Just to let you guys know, I can confirm that after calibrating the color camera myself I get much better results from pose estimation.

I calibrated using OpenCV 3+ and a high-quality, custom-manufactured ChArUco board. I use all 8 distortion parameters when estimating the pose and get much more accurate estimates of translation and rotation.
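
(For those asking how: a hedged sketch of what such a recalibration call can look like with the opencv-contrib aruco module. Here all_corners and all_ids are assumed to hold one interpolateCornersCharuco result per image, image_size is (width, height), and the CALIB_RATIONAL_MODEL flag asks OpenCV to fit all 8 coefficients:)

import cv2

# Fit the full 8-coefficient rational-polynomial model (k1..k6, p1, p2).
flags = cv2.CALIB_RATIONAL_MODEL
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None, flags=flags)
print("RMS reprojection error [px]:", rms)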

rajk97 commented 4 years ago

Hi @amonnphillip !

We have been trying to do intrinsic calibration of the Azure Kinect camera for both color and depth. As I'm a beginner, I followed the calibration technique from the ROS wiki:

http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration

From what I understand, it obtains the calibration information and then calls the ROS service /camera/set_camera_info to change the camera's intrinsic parameters.

I couldn't find the /camera/set_camera_info service being advertised by the camera node, so when I click save after calibrating (following the tutorial), I get the following error:

Traceback (most recent call last):
  File "/opt/ros/melodic/lib/python2.7/dist-packages/camera_calibration/camera_calibrator.py", line 252, in on_mouse
    if self.do_upload():
  File "/opt/ros/melodic/lib/python2.7/dist-packages/camera_calibration/camera_calibrator.py", line 209, in do_upload
    response = self.set_camera_info_service(info)
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 439, in __call__
    return self.call(*args, **kwds)
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 499, in call
    service_uri = self._get_service_uri(request)
  File "/opt/ros/melodic/lib/python2.7/dist-packages/rospy/impl/tcpros_service.py", line 467, in _get_service_uri
    raise ServiceException("service [%s] unavailable"%self.resolved_name)
rospy.service.ServiceException: service [/camera/set_camera_info] unavailable

Is there a way out of it? It seems you were successful with intrinsic calibration. Could you please tell me how you proceeded? Kindly comment.

ROS Version: Melodic Ubuntu 18.04

ROS services running:

/azure_kinect_ros_driver/get_loggers
/azure_kinect_ros_driver/set_logger_level
/depth/image_raw/compressed/set_parameters
/depth/image_raw/compressedDepth/set_parameters
/depth/image_raw/theora/set_parameters
/depth_to_rgb/image_raw/compressed/set_parameters
/depth_to_rgb/image_raw/compressedDepth/set_parameters
/depth_to_rgb/image_raw/theora/set_parameters
/ir/image_raw/compressed/set_parameters
/ir/image_raw/compressedDepth/set_parameters
/ir/image_raw/theora/set_parameters
/joint_state_publisher/get_loggers
/joint_state_publisher/set_logger_level
/rgb/image_raw/compressed/set_parameters
/rgb/image_raw/compressedDepth/set_parameters
/rgb/image_raw/theora/set_parameters
/rgb_to_depth/image_raw/compressed/set_parameters
/rgb_to_depth/image_raw/compressedDepth/set_parameters
/rgb_to_depth/image_raw/theora/set_parameters
/robot_state_publisher/get_loggers
/robot_state_publisher/set_logger_level
/rosout/get_loggers
/rosout/set_logger_level
/rostopic_20269_1575590186848/get_loggers
/rostopic_20269_1575590186848/set_logger_level

ROS nodes running:

/azure_kinect_ros_driver
/joint_state_publisher
/robot_state_publisher
/rosout
/rostopic_20269_1575590186848

Appreciate your time and help. Thanks!

jessekirbs commented 4 years ago

@amonnphillip Would you mind sharing the software you used to fix the intrinsic calibration with OpenCV 3 and the Charuco board? Thank you!

@t-walker-21 Did you ever end up solving this issue to your satisfaction? I'd love to take a look at your intrinsic calibration method as well if you don't mind. Thanks!

Azure Team - Has there been any progress on diagnosing and solving this issue? I can confirm that the intrinsic calibration is off on my K4As as well.

ooeygui commented 4 years ago

I've reached out to the Azure Kinect SDK team. I'm not clear where the problem is: is the ROS node interpreting values incorrectly, or is the calibration being published incorrectly?

helenol commented 4 years ago

Hi all, to clarify again: there's no issue (i.e., incorrectness) in the calibration information from the Azure Kinect. It is correct, and it uses a model supported by OpenCV (8-parameter rational polynomial), but not by ROS and not by many of the packages you're using. Keep in mind that the only model fully supported by ROS, 4 (or 5?)-parameter radial-tangential, should be OK for the RGB camera but is insufficient for the IR camera in case you're using that (it's too wide-angle to be properly explained by radtan).

@rajk97: the Azure Kinect does NOT have a way to update the flashed calibration to a different model. Once you calibrate the camera, you have to keep track of the camera_info message and re-publish it yourself.

@jessekirbs: you can check out a bunch of different calibration methods, including:

OpenCV: https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_calibration/py_calibration.html
ROS: http://wiki.ros.org/camera_calibration
MATLAB & an intro to the subject: https://www.mathworks.com/help/vision/ug/camera-calibration.html and https://www.mathworks.com/help/vision/ug/single-camera-calibrator-app.html

And literally any program that works with images will generally let you calibrate using an appropriate model. :) The 4/5-parameter radtan model (radial-tangential, or "plumb bob" as it's known in ROS for whatever reason) is the most commonly used in computer vision.
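
(A minimal sketch of the "re-publish it yourself" approach with rospy; the topic names and placeholder numbers below are assumptions to be replaced with your own calibration:)

import rospy
from sensor_msgs.msg import CameraInfo

# Placeholder values -- substitute your own calibration results.
fx, fy, cx, cy = 970.0, 970.0, 1024.0, 768.0
k1, k2, p1, p2, k3 = 0.0, 0.0, 0.0, 0.0, 0.0

rospy.init_node("calibrated_camera_info")
pub = rospy.Publisher("rgb/camera_info_calibrated", CameraInfo, queue_size=1)

info = CameraInfo()
info.distortion_model = "plumb_bob"
info.D = [k1, k2, p1, p2, k3]
info.K = [fx, 0, cx, 0, fy, cy, 0, 0, 1]
info.R = [1, 0, 0, 0, 1, 0, 0, 0, 1]
info.P = [fx, 0, cx, 0, 0, fy, cy, 0, 0, 0, 1, 0]

def relay(msg):
    # Reuse the driver's timestamps/frame_id so the new message stays in sync.
    info.header = msg.header
    info.width, info.height = msg.width, msg.height
    pub.publish(info)

rospy.Subscriber("rgb/camera_info", CameraInfo, relay)
rospy.spin()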

zhaocheng-hub commented 3 years ago

@amonnphillip Hello, can you share your calibration method?

christian-rauch commented 3 years ago

See https://github.com/microsoft/Azure_Kinect_ROS_Driver/pull/200 for calibrating the camera via ROS.