microsoft / Azure-Kinect-Sensor-SDK

A cross platform (Linux and Windows) user mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect
MIT License

A distortion of depth data when mapping depth to point cloud #1487

Open lintianfang opened 3 years ago

lintianfang commented 3 years ago

I generate a point cloud from raw depth data, but there is a distortion that looks like a pincushion (pillow-shaped) distortion. Because my program needs to support several kinds of RGB-D cameras, I cannot use the function generate_point_cloud(depth_image, xy_table, point_cloud, &point_count); that would mean changing the shared function map_depth_to_point() that is also used for the Intel RealSense and the Artec scanner. So I want to ask whether I can map depth to a point cloud using the intrinsics directly.

I have already tried the undistort example from https://github.com/microsoft/Azure-Kinect-Sensor-SDK/tree/develop/examples/undistort, but it does not solve the problem.

Here is my code:

bool rgbd_kinect_azure::map_depth_to_point(int x, int y, int depth, float* point_ptr) const
{
    double fx_d = 1.0 / intrinsics->param.fx;
    double fy_d = 1.0 / intrinsics->param.fy;
    double cx_d = intrinsics->param.cx;
    double cy_d = intrinsics->param.cy;
    // set 0.001 for current vr_rgbd
    double d = 0.001 * depth * 1.100;
    // solve the radial and tangential distortion
    double x_distorted = x * (1 + intrinsics->param.k1 * pow(intrinsics->param.metric_radius, 2.0)
                                + intrinsics->param.k2 * pow(intrinsics->param.metric_radius, 4.0)
                                + intrinsics->param.k3 * pow(intrinsics->param.metric_radius, 6.0));
    double y_distorted = y * (1 + intrinsics->param.k4 * pow(intrinsics->param.metric_radius, 2.0)
                                + intrinsics->param.k5 * pow(intrinsics->param.metric_radius, 4.0)
                                + intrinsics->param.k6 * pow(intrinsics->param.metric_radius, 6.0));
    x_distorted = x_distorted + 2.0 * intrinsics->param.p1 * x * y
                + intrinsics->param.p2 * (pow(intrinsics->param.metric_radius, 2.0) + 2 * pow(x, 2.0));
    y_distorted = y_distorted + 2.0 * intrinsics->param.p2 * x * y
                + intrinsics->param.p1 * (pow(intrinsics->param.metric_radius, 2.0) + 2 * pow(y, 2.0));
    point_ptr[0] = -1.f * float((x_distorted - cx_d) * d * fx_d);
    point_ptr[1] = float((y_distorted - cy_d) * d * fy_d);
    point_ptr[2] = float(d);
    return true;
}
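For comparison, here is a minimal sketch of how the same back-projection could be done through the SDK itself instead of by hand. As far as I understand, the Azure Kinect depth intrinsics follow the rational Brown-Conrady model (k1-k3 and k4-k6 form the numerator and denominator of the radial term, and the inverse has to be solved iteratively), and metric_radius is a single constant for the whole camera rather than a per-pixel radius; k4a_calibration_2d_to_3d() handles all of that internally. The calibration argument (from k4a_device_get_calibration()) and the millimeter-to-meter conversion are assumptions about the surrounding code; only the k4a_* calls are the real SDK API.

    #include <k4a/k4a.h>

    // Sketch: back-project one depth pixel using the SDK's own distortion model.
    bool map_depth_pixel_to_point(const k4a_calibration_t& calibration,
                                  int x, int y, int depth_mm, float* point_ptr)
    {
        k4a_float2_t pixel{ { static_cast<float>(x), static_cast<float>(y) } };
        k4a_float3_t point_mm;
        int valid = 0;
        k4a_result_t res = k4a_calibration_2d_to_3d(&calibration,
                                                    &pixel,
                                                    static_cast<float>(depth_mm),
                                                    K4A_CALIBRATION_TYPE_DEPTH,
                                                    K4A_CALIBRATION_TYPE_DEPTH,
                                                    &point_mm,
                                                    &valid);
        if (K4A_FAILED(res) || !valid)
            return false;
        // Convert from millimeters (SDK units) to meters to match the code above.
        point_ptr[0] = point_mm.xyz.x * 0.001f;
        point_ptr[1] = point_mm.xyz.y * 0.001f;
        point_ptr[2] = point_mm.xyz.z * 0.001f;
        return true;
    }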

Thanks very much for your consideration!

lintianfang commented 3 years ago

To make this easier to understand, here are more details: Windows 10 10.0.18363 + VS2019 + 64-bit + SDK 1.4.1. The figure below shows a scan of a wall of my office. The wall should be flat along the direction of the red arrows, but the scan bends at both ends as the blue arrows show. If I use generate_point_cloud(), the result is correct, but with my own map_depth_to_point() the distortion appears. I wonder what kind of filter or mapping algorithm is applied in generate_point_cloud().

[image: point cloud of the office wall, bent at both ends]
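For reference, here is roughly what the xy_table approach in the fastpointcloud example does, as far as I can tell: generate_point_cloud() does not apply any extra filtering; the xy_table is filled once per calibration by calling k4a_calibration_2d_to_3d() at unit depth, so all the distortion handling happens there, and each frame is then just the measured depth times the precomputed ray. The sketch below is simplified from the example, and the helper names (PointXYZ, depth_to_point, etc.) are mine, not the SDK's.

    #include <k4a/k4a.h>
    #include <cmath>
    #include <cstdint>
    #include <vector>

    // Illustrative names; not part of the SDK.
    struct PointXYZ { float x, y, z; };

    // Precompute, for every depth pixel, the undistorted ray direction (X/Z, Y/Z).
    std::vector<k4a_float2_t> create_xy_table(const k4a_calibration_t& calibration)
    {
        const int w = calibration.depth_camera_calibration.resolution_width;
        const int h = calibration.depth_camera_calibration.resolution_height;
        std::vector<k4a_float2_t> table(static_cast<size_t>(w) * h);
        for (int y = 0; y < h; y++)
        {
            for (int x = 0; x < w; x++)
            {
                k4a_float2_t p{ { static_cast<float>(x), static_cast<float>(y) } };
                k4a_float3_t ray;
                int valid = 0;
                k4a_calibration_2d_to_3d(&calibration, &p, 1.0f,
                                         K4A_CALIBRATION_TYPE_DEPTH,
                                         K4A_CALIBRATION_TYPE_DEPTH, &ray, &valid);
                // At a depth of 1 the result is the normalized, undistorted ray.
                table[y * w + x].xy.x = valid ? ray.xyz.x : std::nanf("");
                table[y * w + x].xy.y = valid ? ray.xyz.y : std::nanf("");
            }
        }
        return table;
    }

    // Per frame, a 3D point is just the measured depth times the precomputed ray.
    PointXYZ depth_to_point(const std::vector<k4a_float2_t>& table,
                            int w, int x, int y, uint16_t depth_mm)
    {
        const k4a_float2_t& ray = table[y * w + x];
        return { depth_mm * ray.xy.x, depth_mm * ray.xy.y, static_cast<float>(depth_mm) };
    }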

AlbertoMQ commented 1 year ago

I also have this issue when getting a point cloud using intrinsics.

lintianfang commented 1 year ago

I also have this issue when getting a point cloud using intrinsics.

Did you solve it?