microsoft / Azure_Kinect_ROS_Driver

A ROS sensor driver for the Azure Kinect Developer Kit.
MIT License

driver publishes unrectified images by default #199

Closed christian-rauch closed 3 years ago

christian-rauch commented 3 years ago

Describe the bug

The intrinsic calibration that is used to project colour and depth pixels to a 3D point cloud is wrong or not properly processed. According to https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/726 and https://github.com/microsoft/Azure_Kinect_ROS_Driver/issues/105, it is neither possible to update the intrinsic parameters on the camera, nor does the ROS node support calibration.

Edit: The driver.launch indeed publishes the correct intrinsics together with the raw unrectified images. These images have to be rectified manually. However, the driver does not currently provide a way to update these intrinsics.

To Reproduce

Steps to reproduce the behavior:

  1. set up a scene with AprilTags
  2. visualise the point cloud as well as the estimated tag frames in RViz
  3. see that frames do not align with depth

Expected behavior

The intrinsic calibration should yield properly aligned 3D point clouds.
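To make "properly aligned" concrete: the point cloud is obtained by back-projecting each depth pixel through the pinhole model given by the camera matrix K. A minimal numpy sketch (the intrinsic values below are made up for illustration, not Kinect calibration):

```python
import numpy as np

# Hypothetical intrinsics for illustration; the real values arrive on the
# driver's camera_info topics.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])

def back_project(u, v, z, K):
    """Back-project pixel (u, v) with depth z (metres) to a 3D point."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# The principal point back-projects onto the optical axis.
print(back_project(320.0, 240.0, 1.0, K))  # -> [0. 0. 1.]
```

If the intrinsics fed into this projection are wrong, or distortion is ignored, every point computed this way is shifted, which is exactly the kind of misalignment visible in the screenshots below.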

Screenshots

The following images show a point cloud together with the coordinate frames of two AprilTags in RViz. One AprilTag is flat on the table and another one is attached to the side of a box and thus orthogonal to the table.

With the Kinect factory calibration, the points and coordinate frames do not align: atag_kinect1 atag_kinect2

As a reference, here is the data from an Asus Xtion Pro Live using the openni2_camera package and manual intrinsic calibration. As you can see, these frames perfectly align with the point cloud. atag_openni1 atag_openni2


ooeygui commented 3 years ago

Thank you for the detailed writeup. This thread (https://github.com/microsoft/Azure_Kinect_ROS_Driver/issues/83) has a discussion about calibration. Color and depth are handled separately, each with their own calibration. Since most downstream tooling doesn't seem to support the required calibration, we've discussed predistorting to align. However, this would introduce performance issues. We have not reevaluated calibration with the switch to ROS2.

christian-rauch commented 3 years ago

I am specifically talking about the intrinsic calibration of the camera (a.k.a. the camera matrix K or C) here, not the extrinsic calibration between two Kinects or between the colour and depth sensors. The colour <-> depth alignment seems to be fine, as I don't see any issues with mismatched colour/depth values at edges etc. But the intrinsic calibration for projecting a point from 2D to 3D and vice versa seems to be off.

From the images above, it is clear that at least one of the cameras (colour or depth) is using the wrong intrinsic parameters. But I cannot know if I can trust the AprilTag pose (i.e. the colour intrinsics are correct) or the depth (i.e. the depth intrinsics are correct) or neither of those (as I don't have a third reference).
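For reference, the lens distortion that rectification removes follows the standard plumb_bob model, whose coefficients D = [k1, k2, p1, p2, k3] are published in sensor_msgs/CameraInfo alongside K. A sketch of the model itself (the standard OpenCV/ROS formulation, not driver code):

```python
def distort(xn, yn, D):
    """Apply the plumb_bob model to normalized image coordinates (x/z, y/z).

    D = [k1, k2, p1, p2, k3]: radial (k*) and tangential (p*) coefficients.
    """
    k1, k2, p1, p2, k3 = D
    r2 = xn * xn + yn * yn
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = xn * radial + 2.0 * p1 * xn * yn + p2 * (r2 + 2.0 * xn * xn)
    yd = yn * radial + p1 * (r2 + 2.0 * yn * yn) + 2.0 * p2 * xn * yn
    return xd, yd

# With all coefficients zero the model is the identity (a perfectly
# rectified image); nonzero coefficients bend straight lines.
assert distort(0.1, 0.2, [0.0] * 5) == (0.1, 0.2)
```

A 2D-to-3D projection that uses K but silently skips this model (or uses the wrong D) would shift points most strongly towards the image corners, where r^2 is largest.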

jmachowinski commented 3 years ago

Hi @christian-rauch, how are you? Hope you're fine ;-)

Did you find out more details about what is off? Or, asked the other way around, did your tests show anything weird with the depth intrinsics?

christian-rauch commented 3 years ago

@jmachowinski Nice to see you :-) It's a small (robotics) world.

tl;dr: The factory calibration is actually used correctly by ROS. I just thought that the driver already publishes rectified images.

When I created this issue, I thought that the driver was already publishing rectified images using the factory calibration. But I was wrong about this: the image_raw topics are indeed the raw unrectified images, and you have to use the image_proc nodelets to rectify the RGB and IR images, or do the projections yourself.
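"Doing the projections yourself" amounts to inverting the distortion model per pixel, which has no closed form and is typically solved by fixed-point iteration. A toy sketch with a single radial coefficient k1 (a deliberate simplification of the full plumb_bob model, not the image_proc implementation):

```python
def undistort_normalized(xd, yd, k1, iterations=10):
    """Invert x_d = x * (1 + k1 * r^2) by fixed-point iteration.

    Toy version with one radial coefficient; real rectification inverts
    the full plumb_bob model the same way for every pixel and then remaps
    the image through the resulting lookup table.
    """
    xn, yn = xd, yd
    for _ in range(iterations):
        r2 = xn * xn + yn * yn
        scale = 1.0 + k1 * r2
        xn, yn = xd / scale, yd / scale
    return xn, yn

# Distorting (0.3, 0.4) with k1 = 0.1 gives (0.3075, 0.41);
# the iteration recovers the original normalized coordinates.
xn, yn = undistort_normalized(0.3075, 0.41, 0.1)
```

The image_proc nodelets do this once per pixel up front and cache the remap table, which is why subscribing to image_rect is cheap after the first frame.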

I created https://github.com/microsoft/Azure_Kinect_ROS_Driver/pull/200 to enable a manual calibration similar to the OpenNI drivers. In https://github.com/microsoft/Azure_Kinect_ROS_Driver/pull/200#issuecomment-883406304 I compared the raw and rectified images (factory and manual). As you can see from the lights in the ceiling, the factory calibration already provides very good rectification, although with a black border. The manual calibration keeps the image boundaries, but then does not achieve the same level of rectification. But maybe this can also be fixed with a much larger calibration board.

I am using the manual calibration because we are processing the colour images directly and don't want to deal with cropping out the black border. If you use the colour image as part of a 3D-only pipeline, then using the factory calibration is better.

jmachowinski commented 3 years ago

Thanks for the update.

So the title of the issue is currently a bit misleading. The intrinsics are correct, but there is no option to supply your own calibration (if you want that for some reason...).

christian-rauch commented 3 years ago

> So the title of the issue is currently a bit misleading. The intrinsics are correct, but there is no option to supply your own calibration (if you want that for some reason...).

Right, I did not update the description after opening PR #200. I updated the title just now and added a clarification to the issue description.