OpenKinect / libfreenect2

Open source drivers for the Kinect for Windows v2 device

libfreenect2 calibration very inaccurate #596

Open ahundt opened 8 years ago

ahundt commented 8 years ago

Overview Description:

I've come to believe there is likely something very inaccurate about the current calibration routines in the released libfreenect2 0.1. When I look at the data, the color mapping onto the point cloud is way off. When I stand 1.5-2m from the camera with a wall at ~5m, sometimes "inches" of wall color are placed on 3D cloud points that measure the distance to my arm, as seen using Protonect. This is easier to visualize with the libfreenect2pclgrabber application.

After iai_kinect2 calibration this is essentially eliminated. Furthermore, Microsoft's 3D Builder application does not exhibit the same calibration problems using just the hardware, so I think something may be mistaken in the equations as reverse engineered in the large calibration issues created for this project.

It may be wise to have another look at those algorithms to see if they can be fixed, or integrate another calibration process directly into libfreenect2.

Version, Platform, and Hardware Bug Found:

0.1 and 0.1.1 releases; OS X 10.11 and Ubuntu 14.04

Steps to Reproduce:

  1. Run Protonect or libfreenect2pclgrabber.
  2. Place a large object 1.5-2m from the Kinect, with a differently colored wall at 3-5m.

Actual Results:

Wall color appears on the object's point cloud points, and object color on the wall's point cloud points.

Expected Results:

Color applied to appropriate cloud points.

Reproducibility:

100%, with multiple Kinect v2 test devices and operating systems.

Additional Information:

xlz commented 8 years ago

How inaccurate is very inaccurate? Post some images?

The built-in calibration is surely not as accurate as a hand-calibrated one. Whether that is acceptable depends on how inaccurate it is.

xlz commented 8 years ago

Insufficient information.

ahundt commented 8 years ago

It was enough of a problem that we switched back to the PrimeSense sensors we have. I'll ask whether @cpaxton, who took screenshots, can put some of them up.

cpaxton commented 8 years ago

Here is a video comparison between the Kinectv2 and the Kinectv1: youtube

Here is a video of the depth data we are getting from the Kinect v2: youtube

Note in the first video large objects like the Bosch cases come out just fine. This is only an issue for us because we are attempting to manipulate relatively small objects (~2 cm across) autonomously. I am not entirely convinced this is a calibration issue -- the depth-to-color registration looks fine to me -- but I am open to ideas.

xlz commented 8 years ago

I can't spot what is wrong in the two videos. What is the object in the second video?

cpaxton commented 8 years ago

The objects are all straight magnetic linking blocks as seen here with meshes available here.

The problem is that we are trying to get an accurate pose estimate for relatively small objects, which means that noisy depth data like this is intolerable. You can see the objects are fairly noisy and in some cases badly deformed.

xlz commented 8 years ago

So the problem now is deformed depth from a straight surface? The issue previously reported was a mismatch in the color-to-depth registration.

I suspect this has to do with surface reflectance, or multipath interference (example: https://github.com/OpenKinect/libfreenect2/issues/319). Wrap the magnetic block with paper and see if that improves it? Otherwise it may be multipath interference.

cpaxton commented 8 years ago

Sorry if there was any confusion. This issue was probably misreported because this is the only problem we are currently having with the Kinect2.

ahundt commented 8 years ago

Also note that posted videos are with iai calibration. Without it we get colors from the background on depths tied to objects.

ahundt commented 8 years ago

Yeah, unfortunately those videos are for a separate problem; sorry about the confusion. We don't have any videos of uncalibrated data yet. We plan to take uncalibrated video when we have time.

philipNoonan commented 8 years ago

Regarding the reliability of depth data, I have created fusion scans of a face using two different Kinect v2s: one where I modified the optics for near mode, and one where I replaced the IR lens with a telephoto lens. After manual camera calibration, I get:

```cpp
#ifdef NEARMODE
ir_cameraparams.fx = 364.7546f;
ir_cameraparams.fy = 365.5064f;
ir_cameraparams.cx = 254.0044f;
ir_cameraparams.cy = 200.9755f;
ir_cameraparams.k1 = 0.0900f;
ir_cameraparams.k2 = -0.2460f;
ir_cameraparams.k3 = 0.0566f;
ir_cameraparams.p1 = 0.0018f;
ir_cameraparams.p2 = 0.0017f;
#else
ir_cameraparams.fx = 1610.9208f;
ir_cameraparams.fy = 1608.9916f;
ir_cameraparams.cx = 214.2099f;
ir_cameraparams.cy = 154.1397f;
ir_cameraparams.k1 = 0.2806f;
ir_cameraparams.k2 = -12.9896f;
ir_cameraparams.k3 = 182.3996f;
ir_cameraparams.p1 = -0.0128f;
ir_cameraparams.p2 = -0.0108f;
#endif
```

And the mean Hausdorff distance between nearest-neighbor points of the fusion scans of a face is < 1mm. I can't comment on colour-to-depth registration, but the z-lookup tables seem to be working well, even with very extreme lenses.

ahundt commented 8 years ago

Perhaps https://github.com/OpenKinect/libfreenect2/issues/144 is also relevant?

floe commented 8 years ago

> Also note that posted videos are with iai calibration. Without it we get colors from the background on depths tied to objects.

The iai calibration is done separately with a chessboard, correct? Then it will very likely be better than the factory calibration, regardless of which software is used to access the Kinect2. To separate the different issues being discussed here, could you try to take static color-depth registered snapshots of the same scene with

  1. the official SDK
  2. the internal libfreenect2 calibration
  3. your iai-kinect calibration

Then it should be easier to tell which one has the best quality (likely 3.) and if 1. and 2. are any different.

floe commented 7 years ago

For the record, I recently came across the RoomAlive Toolkit by Microsoft Research, which is used to calibrate multiple Kinects with respect to each other. It performs its own intrinsic calibration, which may help in understanding some of the remaining unclear aspects of the factory calibration. Note that I haven't looked into this in detail, so I'm just posting it here for reference. (/cc @christiankerl @wiedemeyer)

https://github.com/Microsoft/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/Kinect2Calibration.cs https://github.com/Microsoft/RoomAliveToolkit/blob/master/ProCamCalibration/ProCamEnsembleCalibration/CameraMath.cs