microsoft / Azure_Kinect_ROS_Driver

A ROS sensor driver for the Azure Kinect Developer Kit.
MIT License

On "color spill", or "flying pixels" #235

Open francescomilano172 opened 2 years ago

francescomilano172 commented 2 years ago

Describe the bug Using the factory calibration, point clouds exhibit “color spill”, or “flying pixels” at the object boundaries. This happens both with the latest version of the ROS driver and with the Azure Kinect Viewer. Can this problem be solved through manual calibration?

To Reproduce

Kinect Azure Viewer

  1. Launch k4aviewer and open the device.
  2. As View Mode select 3D and Color.
  3. Point the camera at any object and observe the color spill at the object boundaries.

/points2 topic from ROS driver

  1. Set the flags point_cloud and rgb_point_cloud to true in kinect_rgbd.launch. Also keep point_cloud_in_depth_frame set to false, so as to obtain depth_to_rgb-style backprojection. This should be the choice that produces the least color spill, as mentioned for instance here.
  2. Run kinect_rgbd.launch and look at the /points2 topic in RViz.
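For reference, step 1 corresponds to settings along these lines inside kinect_rgbd.launch (a sketch based on the flag names quoted above; the actual layout of the launch file may differ):

```xml
<!-- Flags from step 1; names taken from the issue text, layout illustrative. -->
<arg name="point_cloud" value="true" />
<arg name="rgb_point_cloud" value="true" />
<arg name="point_cloud_in_depth_frame" value="false" />
```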

Expected behavior In the desired setup, I would need to have aligned and rectified RGB and depth frames and use camera intrinsics to retrieve a point cloud. In the point cloud, there should be no color spill.

Screenshots In all the following screenshots, color spill on the edges of the pink ball can be seen


Additional context It is unclear to me whether color spill is a problem that can be avoided at all. Previous conversations suggest that this is an inherent limitation of the Azure Kinect (e.g., the paper mentioned here). On the other hand, some threads suggest that the problem might be related to imperfect calibration/camera alignment (e.g., here and here) and that a custom calibration can yield better alignment between the RGB and IR cameras (e.g., here and here). However, the conversations are overall inconclusive, with mixed opinions (negative 1, negative 2, positive, unclear).

I also saw that there is now the possibility to manually calibrate the intrinsics of the cameras and use them through the ROS interface instead of the factory calibration (here and here). Can a custom calibration alleviate or fix this problem, or is it a hardware limitation?

Also, is this to some extent due to the interpolation introduced both when warping the depth image into the RGB frame (e.g., here and here) and when rectifying the images before backprojection (as happens in rgbd_launch/image_geometry/cv2.remap, see e.g., this issue)? In particular, different interpolation schemes produce very different results (see, e.g., here), but even the recommended nearest-neighbor interpolation used in rgbd_launch for rectification (here) does not solve the color spill problem.
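As a toy 1-D illustration of the interpolation point (not the driver's actual code), bilinear resampling of a depth map invents depths that lie on no real surface at an object boundary, which is exactly what produces flying pixels, while nearest-neighbour resampling does not:

```python
import numpy as np

# 1-D depth profile across an object boundary:
# foreground at 0.5 m, background at 3.0 m (made-up values).
depth = np.array([0.5, 0.5, 0.5, 3.0, 3.0, 3.0])

# Sample at a half-pixel offset, as a remap/warp between frames would.
x = np.arange(len(depth) - 1) + 0.5

# Bilinear: averages the two neighbours, inventing an intermediate
# depth at the edge that exists on no real surface.
bilinear = 0.5 * (depth[:-1] + depth[1:])

# Nearest-neighbour: snaps to one side of the edge; every output depth
# is a depth that was actually measured.
nearest = depth[np.round(x).astype(int)]

print(bilinear)  # the 1.75 m value at the edge is a "flying pixel"
print(nearest)   # only 0.5 m and 3.0 m appear
```

This only explains the flying-pixel part; nearest-neighbour still mis-colours edge pixels when the two cameras are misregistered, which matches the observation that it alone does not remove the spill.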

ooeygui commented 2 years ago

Thank you for the bug report and the details.

christian-rauch commented 2 years ago

If k4aviewer already shows this effect in the point cloud, then there is nothing the ROS node can do about the point cloud on /points2.

You can use the original colour and depth image and the intrinsics+extrinsics in two ways:

  1. use the rgbd_launch nodes via kinect_rgbd.launch to do the rectification, registration and projection manually
  2. manually calibrate the camera and check whether kinect_rgbd.launch provides better results afterwards

It may be sufficient to only adjust the extrinsic calibration between the colour and depth camera and keep the intrinsics as they are.
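As a back-of-the-envelope sketch of why even a small extrinsic error shows up as colour spill at edges (the focal length and 3-D point below are made-up pinhole numbers, not the factory calibration):

```python
import numpy as np

# Illustrative pinhole model; fx, cx and the 3-D point are assumptions.
fx, cx = 600.0, 320.0                 # colour-camera focal length / principal point (px)
p_depth = np.array([0.0, 0.0, 1.0])   # point 1 m in front of the depth camera

def project_u(p):
    """Horizontal pixel coordinate of a 3-D point in the colour image."""
    return fx * p[0] / p[2] + cx

# Perfect extrinsics: identity rotation (translation ignored for simplicity).
u_true = project_u(p_depth)

# A 0.5 degree error in the depth-to-colour rotation about the y axis...
err = np.deg2rad(0.5)
R_err = np.array([[np.cos(err), 0.0, np.sin(err)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(err), 0.0, np.cos(err)]])
u_bad = project_u(R_err @ p_depth)

# ...shifts the colour lookup by roughly fx * tan(err) ~ 5 px, so pixels
# near an object edge fetch colour from the wrong side of the boundary.
print(abs(u_bad - u_true))
```

A shift of a few pixels is invisible on flat surfaces but paints background colour onto foreground points (and vice versa) at depth discontinuities, which is consistent with the idea that tuning only the extrinsics may already help.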

francescomilano172 commented 2 years ago

Hi @christian-rauch, thank you for your answer. For number 1., yes, this is what we are already doing, following the procedure in https://github.com/microsoft/Azure_Kinect_ROS_Driver/issues/212. For 2., can you recommend any calibration procedure for the extrinsics between the colour and the IR camera?

christian-rauch commented 2 years ago

Since https://github.com/microsoft/Azure_Kinect_ROS_Driver/pull/200, you can use the camera_calibration package to assign new intrinsic parameters. You can calibrate the RGB and IR cameras separately.

The extrinsic parameters are published as tf. But I don't know of a procedure that does this automatically. You may have to figure it out manually by adjusting the extrinsic tf and comparing the quality of the registration.
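A minimal sketch of such a hand-tuning step, assuming a quaternion in ROS (x, y, z, w) convention: compose the published extrinsic rotation with a small candidate correction and inspect the registration with the result (the identity "factory" quaternion below is a placeholder, not the real extrinsic):

```python
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of two quaternions in (x, y, z, w) order, as used by ROS tf.
    x1, y1, z1, w1 = q1
    x2, y2, z2, w2 = q2
    return np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])

factory = np.array([0.0, 0.0, 0.0, 1.0])   # placeholder for the published extrinsic
tweak_deg = 0.2                            # candidate correction, degrees
half = np.deg2rad(tweak_deg) / 2.0
tweak = np.array([0.0, np.sin(half), 0.0, np.cos(half)])  # small rotation about y

corrected = quat_mul(tweak, factory)       # corrected extrinsic rotation
print(corrected)
```

The corrected quaternion could then be republished as a static transform (e.g. with tf2_ros static_transform_publisher) in place of the driver's tf, iterating on the tweak until the colour/depth registration looks best; the driver's actual frame names and transform direction would need to be checked first.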