microsoft / Azure-Kinect-Sensor-SDK

A cross platform (Linux and Windows) user mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect
MIT License

color image distortion correction request #1509

Closed · baihaozi12 closed this 3 years ago

baihaozi12 commented 3 years ago

We want to use the captured images for stitching but got unexpected results.

We want to use the captured color images for a stitching (panorama) task, but we find that camera distortion affects the stitched image. We used the distortion parameters (k1, k2, k3, k4, p1, p2) to correct the distortion with OpenCV, but the result we got is unexpected. Are there any ideas or suggestions for this task?

lintianfang commented 3 years ago

I think I have the same issue.

Wzj02200059 commented 3 years ago

I have the same problem. Did you solve it, @baihaozi12?

baihaozi12 commented 3 years ago

@Wzj02200059 not yet

diablodale commented 3 years ago

Hi. I have successfully used the Kinect SDK and OpenCV to undistort color images. There is example code in the SDK that demonstrates how to do it.

What is the "result we got is unexpected"? What specifically is unexpected?

lintianfang commented 3 years ago

> Hi. I have successfully used the Kinect SDK and OpenCV to undistort color images. There is example code in the SDK that demonstrates how to do it.
>
> What is the "result we got is unexpected"? What specifically is unexpected?

Thanks for your reply! If you use k4a_transformation_depth_image_to_point_cloud() or k4a_transformation_color_image_to_depth_camera(), the undistortion demo works well. However, if you write the function yourself, it does not work correctly. Instead of converting the whole color image to a point cloud, I usually write my own map_color_to_depth() and map_depth_to_point() using the intrinsic parameters. I suspect there is a filter inside their functions, and they have not published how it works.

diablodale commented 3 years ago

Have you consulted https://github.com/microsoft/Azure-Kinect-Sensor-SDK/tree/develop/examples/fastpointcloud? That example shows the code needed to transform a depth pixel into a point-cloud point. Perhaps you can use that general approach while also adding in the undistortion transform from the intrinsics.

lintianfang commented 3 years ago

Thanks so much for your reply! Using k4a_calibration_2d_to_3d(&camera_calibration, &p, depth, K4A_CALIBRATION_TYPE_DEPTH, K4A_CALIBRATION_TYPE_DEPTH, &ray, &valid);, I do get the correct result for a point! But I am still curious how to get the point using the intrinsics directly, as in the following snippet; there must be a filter that is unpublished:

    double fx_d = 1.0 / intrinsics->param.fx;
    double fy_d = 1.0 / intrinsics->param.fy;
    double cx_d = intrinsics->param.cx;
    double cy_d = intrinsics->param.cy;
    double d = 0.001 * depth;
    // solve the radial and tangential distortion
    double x_distorted = x * (1 + intrinsics->param.k1 * pow(intrinsics->param.metric_radius, 2.0) + intrinsics->param.k2 * pow(intrinsics->param.metric_radius, 4.0) + intrinsics->param.k3 * pow(intrinsics->param.metric_radius, 6.0));
    double y_distorted = y * (1 + intrinsics->param.k4 * pow(intrinsics->param.metric_radius, 2.0) + intrinsics->param.k5 * pow(intrinsics->param.metric_radius, 4.0) + intrinsics->param.k6 * pow(intrinsics->param.metric_radius, 6.0));
    x_distorted = x_distorted + 2.0 * intrinsics->param.p1 * x * y + intrinsics->param.p2 * (pow(intrinsics->param.metric_radius, 2.0) + 2 * pow(x, 2.0));
    y_distorted = y_distorted + 2.0 * intrinsics->param.p2 * x * y + intrinsics->param.p1 * (pow(intrinsics->param.metric_radius, 2.0) + 2 * pow(y, 2.0));
    // output a point
    point_ptr[0] = -1.f * float((x_distorted - cx_d) * d * fx_d);
    point_ptr[1] = float((y_distorted - cy_d) * d * fy_d);
    point_ptr[2] = float(d);

diablodale commented 3 years ago

If k4a_calibration_2d_to_3d() worked for you, you can view all of its code; it is here in the repo. That function calls several other functions; if you follow the code, one of the functions it calls is... https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/61951daac782234f4f28322c0904ba1c4702d0ba/src/transformation/intrinsic_transformation.c#L233

I recommend you start at k4a_calibration_2d_to_3d() and follow the code for the specific parameters you use.

lintianfang commented 3 years ago

Thanks very much!!!