Open lintianfang opened 1 year ago
Dear guys,
I would like to map the color image to the WFOV depth image without the SDK function k4a_calibration_2d_to_3d(), so I copied part of the code from https://github.com/microsoft/Azure-Kinect-Sensor-SDK/blob/61951daac782234f4f28322c0904ba1c4702d0ba/src/transformation/intrinsic_transformation.c#L233
But the code seems to work only for the NFOV modes, where the resolutions are 640x576 and 320x288. If I switch to WFOV mode, the image becomes square (1024x1024 or 512x512), and the distortion can no longer be removed with the Brown-Conrady method.
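For reference, here is a minimal sketch of what I understand the rational Brown-Conrady model and its iterative inversion to look like. The coefficients below are made up for illustration, not real Azure Kinect calibration values, and the SDK source linked above uses a more robust Newton-style iteration; this simple fixed-point version is only expected to converge for mild distortion, which may be exactly what breaks down for the much stronger fisheye distortion of the WFOV lens:

```python
# Hedged sketch of the rational Brown-Conrady ("rational 6KT") distortion model.
# k = (k1, k2, k3, k4, k5, k6, p1, p2); all values here are illustrative only.

def distort(xn, yn, k):
    """Apply forward distortion to a normalized image point (xn, yn)."""
    r2 = xn * xn + yn * yn
    # rational radial term: (1 + k1 r^2 + k2 r^4 + k3 r^6) / (1 + k4 r^2 + k5 r^4 + k6 r^6)
    radial = (1 + k[0] * r2 + k[1] * r2 ** 2 + k[2] * r2 ** 3) / \
             (1 + k[3] * r2 + k[4] * r2 ** 2 + k[5] * r2 ** 3)
    # tangential terms with p1 = k[6], p2 = k[7]
    xd = xn * radial + 2 * k[6] * xn * yn + k[7] * (r2 + 2 * xn * xn)
    yd = yn * radial + k[6] * (r2 + 2 * yn * yn) + 2 * k[7] * xn * yn
    return xd, yd

def undistort(xd, yd, k, iters=20):
    """Invert the distortion by fixed-point iteration, starting from the
    distorted point. Converges only when the distortion is mild; the SDK
    instead iterates with the analytic Jacobian."""
    xn, yn = xd, yd
    for _ in range(iters):
        x2, y2 = distort(xn, yn, k)
        xn += xd - x2
        yn += yd - y2
    return xn, yn
```

A quick round-trip check (`undistort(*distort(x, y, k), k)` returning approximately `(x, y)`) passes with small coefficients; my guess is that for WFOV the radial polynomial is no longer well-behaved over the full square image, so this kind of inversion stops converging near the edges.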
Can anyone give me a hint?
I raised this issue two years ago, but it is still not fully resolved: https://github.com/microsoft/Azure-Kinect-Sensor-SDK/issues/1509.
Thank you very much!