microsoft / Azure-Kinect-Sensor-SDK

A cross platform (Linux and Windows) user mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect
MIT License

How to get clear checkerboard images from IR camera? #921

Closed — jasjuang closed this issue 4 years ago

jasjuang commented 4 years ago

I tried taking snapshots of the checkerboard with the IR camera on the Azure Kinect, and the resulting image has very low intensity like below:

[image "test": low-intensity IR snapshot of the checkerboard]

I tried lighting the checkerboard with an IR floodlight with a wavelength of 830 nm (similar to the first-gen Xbox 360 Kinect), but the intensity doesn't change, which leads me to think the IR camera on the Azure Kinect is not operating at this wavelength. Is there a spec somewhere for the Azure Kinect that shows what type of IR filter is on the IR camera?

The reason I want to do this is explained in #803.

wes-b commented 4 years ago

850 nm is the wavelength you are looking for. We have it documented here if you want more info: https://docs.microsoft.com/en-us/azure/Kinect-dk/hardware-specification#depth-camera-supported-operating-modes

jasjuang commented 4 years ago

Thanks for the pointer. I will try to find an 850nm floodlight and see if this solves the issue.

jasjuang commented 4 years ago

@wes-b Another question I have is how the factory calibration parameters are determined in the first place. Are they also determined through a similar checkerboard-style method, or through a completely different method, for example derived from the hardware itself?

rajeev-msft commented 4 years ago

The intrinsics and extrinsics for the Azure Kinect devices are calibrated at the factory using fiducials, similar to the checkerboard approach.

jasjuang commented 4 years ago

@rajeev-msft thanks for the quick answer. In that case it makes sense for us to re-estimate the calibration parameters in the multi-Azure-Kinect scenario, because we can then apply a global bundle adjustment. Using the factory calibration parameters directly is like calibrating a multi-device system without bundle adjustment, hence the problem in #803. I think the reason we can use the factory calibration parameters directly for the RealSense is that there's no distortion, so there are 8 fewer parameters to estimate per device and less chance of the calibration getting stuck in a local minimum during the non-linear optimization.

jasjuang commented 4 years ago

@wes-b I just tested lighting the checkerboard with an 850 nm IR floodlight, and unfortunately, for some reason, it still shows the same low-intensity darkness I originally reported. I can confirm our 850 nm IR floodlight is working correctly: we placed a conventional camera without a low-pass filter off to the side, and the floodlight's illumination is visible in its image. Please advise on how we can light the scene for the IR camera on the Azure Kinect.

@rajeev-msft can you also let me know how the IR camera is calibrated at the factory?

wes-b commented 4 years ago

Are you using K4A_DEPTH_MODE_PASSIVE_IR? In other modes ambient IR gets cancelled out.
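For reference, a minimal sketch of selecting K4A_DEPTH_MODE_PASSIVE_IR through the Sensor SDK's C API; the device index, frame rate, and error handling here are illustrative assumptions, not part of the thread:

```cpp
// Minimal sketch: grab one passive-IR frame (assumes the first attached device, 30 fps).
#include <k4a/k4a.h>
#include <cstdio>

int main()
{
    k4a_device_t device = nullptr;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
    {
        std::printf("failed to open device\n");
        return 1;
    }

    // In passive IR mode the depth engine's own illuminator pattern is not used,
    // so external IR lighting (e.g. an 850 nm floodlight) shows up in the IR image.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_PASSIVE_IR;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
    {
        std::printf("failed to start cameras\n");
        k4a_device_close(device);
        return 1;
    }

    k4a_capture_t capture = nullptr;
    if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
    {
        k4a_image_t ir = k4a_capture_get_ir_image(capture); // 16-bit IR frame
        if (ir != nullptr)
        {
            // ... save the IR image or feed it to the checkerboard detector ...
            k4a_image_release(ir);
        }
        k4a_capture_release(capture);
    }

    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```

Note that in passive IR mode no depth image is produced, only the IR stream, so this configuration is suited to capturing calibration images rather than normal depth operation.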

wirthual commented 4 years ago

Hi, I am trying to do the same with ChArUco markers. @jasjuang Are you also using OpenCV? I am setting the flag CALIB_RATIONAL_MODEL and expect to get 8 distortion parameters, but instead I get 14, of which 6 are 0. According to the documentation it should return 8, so I am not sure if I can simply ignore those 6 values.

amonnphillip commented 4 years ago

@wirthual I use the following flags when calibrating:

```cpp
int calibrationFlags = cv::CALIB_USE_INTRINSIC_GUESS | cv::CALIB_FIX_PRINCIPAL_POINT |
                       cv::CALIB_FIX_K1 | cv::CALIB_FIX_K2 | cv::CALIB_FIX_K3 |
                       cv::CALIB_FIX_K4 | cv::CALIB_FIX_K5 | cv::CALIB_FIX_K6 |
                       cv::CALIB_RATIONAL_MODEL;
```

I pass the old (Microsoft factory-calibrated) camera matrix and distortion parameters into the calibration function calibrateCameraCharuco. I then simply copy out the new camera matrix (9 values for the 3x3 matrix) and distortion values (8 values: k1, k2, p1, p2, k3, k4, k5, k6) and use these in my processing pipeline. This has the effect of refining the already-calibrated factory values.
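For what it's worth, a minimal sketch of that refinement step, assuming the per-view ChArUco corners/IDs have already been detected (e.g. with cv::aruco::detectMarkers and cv::aruco::interpolateCornersCharuco) and the factory intrinsics converted to OpenCV matrices beforehand:

```cpp
// Sketch: refine factory intrinsics with ChArUco views (assumes opencv_contrib's aruco module).
#include <opencv2/aruco/charuco.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

double refineIntrinsics(const std::vector<std::vector<cv::Point2f>>& charucoCorners, // per view
                        const std::vector<std::vector<int>>& charucoIds,             // per view
                        const cv::Ptr<cv::aruco::CharucoBoard>& board,
                        cv::Size imageSize,
                        cv::Mat& cameraMatrix, // in: factory 3x3 K, out: refined K
                        cv::Mat& distCoeffs)   // in/out: (k1, k2, p1, p2, k3, k4, k5, k6)
{
    int flags = cv::CALIB_USE_INTRINSIC_GUESS | cv::CALIB_FIX_PRINCIPAL_POINT |
                cv::CALIB_FIX_K1 | cv::CALIB_FIX_K2 | cv::CALIB_FIX_K3 |
                cv::CALIB_FIX_K4 | cv::CALIB_FIX_K5 | cv::CALIB_FIX_K6 |
                cv::CALIB_RATIONAL_MODEL;

    std::vector<cv::Mat> rvecs, tvecs;
    // Returns the RMS reprojection error; cameraMatrix/distCoeffs are updated in place.
    return cv::aruco::calibrateCameraCharuco(charucoCorners, charucoIds, board, imageSize,
                                             cameraMatrix, distCoeffs, rvecs, tvecs, flags);
}
```

With CALIB_USE_INTRINSIC_GUESS set, the solver starts from the factory values instead of re-initializing them, which is what gives the refinement behaviour described above.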

Please let us know how this works for you.

jasjuang commented 4 years ago

@wes-b thanks, you are right. When I switch to K4A_DEPTH_MODE_PASSIVE_IR the IR floodlight works, and I can now get a clear checkerboard image from the IR camera.

@wirthual yes, you can ignore the last 6 values; the Azure Kinect only has radial distortion (k1–k6) and tangential distortion (p1, p2).
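As a side note, a sketch of how the factory depth-camera intrinsics can be pulled from the SDK and laid out in the order OpenCV expects (k1, k2, p1, p2, k3, k4, k5, k6); the depth mode and color resolution passed here are just example arguments:

```cpp
// Sketch: build OpenCV cameraMatrix / distCoeffs from the factory calibration.
// Assumes an already-opened k4a_device_t.
#include <k4a/k4a.h>
#include <opencv2/core.hpp>

bool factoryDepthIntrinsicsToOpenCV(k4a_device_t device,
                                    cv::Mat& cameraMatrix,
                                    cv::Mat& distCoeffs)
{
    k4a_calibration_t calibration;
    if (K4A_FAILED(k4a_device_get_calibration(device,
                                              K4A_DEPTH_MODE_PASSIVE_IR,
                                              K4A_COLOR_RESOLUTION_OFF,
                                              &calibration)))
    {
        return false;
    }

    const auto& p = calibration.depth_camera_calibration.intrinsics.parameters.param;

    cameraMatrix = (cv::Mat_<double>(3, 3) << p.fx, 0,    p.cx,
                                              0,    p.fy, p.cy,
                                              0,    0,    1);

    // OpenCV's rational model order: k1, k2, p1, p2, k3, k4, k5, k6.
    distCoeffs = (cv::Mat_<double>(1, 8) << p.k1, p.k2, p.p1, p.p2,
                                            p.k3, p.k4, p.k5, p.k6);
    return true;
}
```

It is worth double-checking the p1/p2 ordering against k4atypes.h, since the SDK's intrinsic-parameter struct and OpenCV do not list the tangential terms in the same order.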

amonnphillip commented 4 years ago

@jasjuang So did calibrating the depth via the IR work for you? Do you get better accuracy?

jasjuang commented 4 years ago

@amonnphillip I tested it with the sample viewer, and I am now in the middle of making changes to my recording app to allow switching modes. I will let you know soon whether it works.