The K4A depth camera cannot always provide a depth value for every pixel. Especially for pixels representing scene points at a large distance or having low reflectivity in IR, the returned signal is too low to compute depth reliably. In such cases, we report a depth value of 0 indicating that the depth reading of the pixel is invalid.
We internally apply some processing steps in the depth engine to reduce the number of invalid pixels. However, we do not provide depth inpainting algorithms, i.e., algorithms that fill in invalid regions to provide a depth reading at every pixel. The reason is that such algorithms may be unreliable. If you want to reduce the number of invalid pixels, you might want to change the depth mode (if this is possible for your application). The narrow FoV modes will give you fewer invalid pixels than the wide FoV modes. Furthermore, the binned modes will give you fewer invalidations than the unbinned ones.
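For illustration, here is a minimal sketch (assuming a single connected device; all other settings are illustrative) of starting the cameras in a binned narrow-FoV depth mode, which tends to produce fewer invalid pixels than the wide-FoV or unbinned modes:

```cpp
#include <k4a/k4a.h>
#include <stdexcept>

int main()
{
    k4a_device_t device = NULL;
    if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
        throw std::runtime_error("Failed to open device");

    // Start with everything disabled, then enable only what is needed.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    // NFOV binned mode: fewer invalid pixels than WFOV / unbinned modes,
    // at the cost of a narrower field of view and lower depth resolution.
    config.depth_mode = K4A_DEPTH_MODE_NFOV_2X2BINNED;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    config.synchronized_images_only = true;

    if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
    {
        k4a_device_close(device);
        throw std::runtime_error("Failed to start cameras");
    }

    // ... capture loop ...

    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```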
For clarity, this issue is not related to the function k4a_transformation_depth_image_to_color_camera(). Unfortunately, the interpolation methods in k4a_transformation_depth_image_to_color_camera_custom() will not help to solve this problem either.
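For reference, a minimal sketch of how the custom transform is invoked is shown below. The calibration and input images are assumed to already exist; note that the interpolation type only controls how the accompanying custom image is resampled, so it does not fill invalid depth pixels.

```cpp
#include <k4a/k4a.h>
#include <cstdint>

// Sketch: transform a depth image plus a custom 16-bit image into the
// color-camera geometry, using linear interpolation for the custom image.
// `calibration`, `depth_image`, and `custom_image` are assumed to be valid.
void transform_with_custom(const k4a_calibration_t &calibration,
                           k4a_image_t depth_image,
                           k4a_image_t custom_image)
{
    k4a_transformation_t transformation = k4a_transformation_create(&calibration);

    const int w = calibration.color_camera_calibration.resolution_width;
    const int h = calibration.color_camera_calibration.resolution_height;

    k4a_image_t transformed_depth = NULL;
    k4a_image_t transformed_custom = NULL;
    k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16, w, h, w * (int)sizeof(uint16_t), &transformed_depth);
    k4a_image_create(K4A_IMAGE_FORMAT_CUSTOM16, w, h, w * (int)sizeof(uint16_t), &transformed_custom);

    // Invalid pixels in the depth image stay invalid; only the custom image
    // is interpolated (here linearly, with 0 used for unmapped pixels).
    k4a_transformation_depth_image_to_color_camera_custom(
        transformation,
        depth_image,
        custom_image,
        transformed_depth,
        transformed_custom,
        K4A_TRANSFORMATION_INTERPOLATION_TYPE_LINEAR,
        0 /* invalid_custom_value */);

    // ... use transformed_depth / transformed_custom ...

    k4a_image_release(transformed_depth);
    k4a_image_release(transformed_custom);
    k4a_transformation_destroy(transformation);
}
```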
@mbleyer Thanks for the clarification! I have tried the inpaint function built into OpenCV, and as mbleyer mentioned, the inpainting algorithm is not reliable. In my project I actually only need the depth values of a few points of interest, so I simply use the nearest valid neighbor to interpolate the missing depth values.
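For anyone wanting to do something similar, a minimal sketch (using OpenCV; the helper name and search radius are illustrative) of looking up the nearest valid depth value around a point of interest:

```cpp
#include <opencv2/core.hpp>
#include <algorithm>
#include <cstdint>
#include <cstdlib>

// Hypothetical helper: return the depth of the nearest valid pixel within
// `max_radius` pixels of (x, y) in a CV_16UC1 depth image, or 0 if none exists.
static uint16_t nearest_valid_depth(const cv::Mat &depth, int x, int y, int max_radius = 10)
{
    if (depth.at<uint16_t>(y, x) != 0)
        return depth.at<uint16_t>(y, x);

    // Search outward in square rings of increasing radius.
    for (int r = 1; r <= max_radius; ++r)
    {
        for (int dy = -r; dy <= r; ++dy)
        {
            for (int dx = -r; dx <= r; ++dx)
            {
                // Only inspect the ring at radius r (the interior was already checked).
                if (std::max(std::abs(dx), std::abs(dy)) != r)
                    continue;
                const int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 || nx >= depth.cols || ny >= depth.rows)
                    continue;
                const uint16_t d = depth.at<uint16_t>(ny, nx);
                if (d != 0)
                    return d;
            }
        }
    }
    return 0; // no valid depth found nearby
}
```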
In the hope of helping others, I also report here what I tested with the cv::inpaint function when trying to interpolate the missing depth values; the results are appended below:
1. Original depth image
2. Depth image transformed to the color camera
3. Mask generated from the transformed image
4. Result of applying cv::inpaint to the depth image
Judging from the images above, the depth values in the inpainted image fluctuate slightly depending on position. It also seems to become harder to distinguish the depth values of points as they get closer to each other.
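A rough sketch of the mask/inpaint steps described above is given below. The exact parameters used in the original test are not known; the scaling, inpaint radius, and method here are illustrative only.

```cpp
#include <opencv2/core.hpp>
#include <opencv2/photo.hpp>   // cv::inpaint

// Sketch: fill invalid (zero) pixels of a transformed CV_16UC1 depth image
// with cv::inpaint. For portability the 16-bit depth is scaled to 8 bit before
// inpainting and scaled back afterwards; this loses precision and is only
// meant to illustrate the workflow tested above.
cv::Mat inpaint_depth(const cv::Mat &depth16)
{
    // Mask of invalid pixels (depth == 0); non-zero mask pixels get inpainted.
    cv::Mat mask = (depth16 == 0);

    // Scale to 8 bit, assuming depth values up to roughly 8 m (8000 mm).
    cv::Mat depth8;
    depth16.convertTo(depth8, CV_8UC1, 255.0 / 8000.0);

    cv::Mat inpainted8;
    cv::inpaint(depth8, mask, inpainted8, /*inpaintRadius=*/5, cv::INPAINT_TELEA);

    // Scale back to 16-bit millimeters.
    cv::Mat inpainted16;
    inpainted8.convertTo(inpainted16, CV_16UC1, 8000.0 / 255.0);

    // Keep the original values wherever they were valid.
    depth16.copyTo(inpainted16, depth16 != 0);
    return inpainted16;
}
```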
@opalmu Thank you for sharing your depth-inpainting experiment. The information you provided in this issue will surely help others too. Let us know if you have any other questions; if not, please close this issue :)
@opalmu I am using your listing from above and I get the results below (with stripes in the transformed depth image). Does anyone know about this problem? Maybe it is related to my sensor? Could somebody help with this?
Operating system: Windows 10 64-bit
Azure-Kinect-Sensor-SDK: latest build from source
Compiler version (if built from source): Visual Studio 2019
Firmware: Loading firmware package AzureKinectDK_Fw_1.6.108079014.bin.
File size: 1294306 bytes
This package contains:
RGB camera firmware: 1.6.108
Depth camera firmware: 1.6.79
Depth config files: 6109.7 5006.27
Audio firmware: 1.6.14
Build Config: Production
Certificate Type: Microsoft
Signature Type: Microsoft
@GHSch Your striping issue may be related to #840 and #294; hope that helps.
Thanks a lot, that solved the problem.
Describe the bug I am trying to get the depth values of points selected in the color image. After I transform the depth image to the color camera, the depth value of the point of interest is often reported as 0, i.e., the point is not detected by the Azure Kinect. Another issue, #588, mentioned that there is some interpolation/filter method to compensate for this. I am wondering how I could use those methods in my code.
First edit: I have found a promising function in the SDK documentation, k4a_transformation_depth_image_to_color_camera_custom, which has an input argument of type k4a_transformation_interpolation_type_t. This looks very promising; could anyone provide an example of how to use it? Much appreciated!
Second edit: #588 mentioned that there is no post-processing filter method. Is there any other way that I could filter the noise?
To Reproduce
Expected behavior Get the true depth value of a specific point extracted from the 2D color image.
Code appendix To help reproduce the error, I also append my code here:
The code should open three windows displaying the color image, the depth image, and the transformed depth image, and finally print the depth value of the center point.
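The original listing is not reproduced here; a rough sketch of the pipeline described above (single device, BGRA32 color output, error handling omitted) might look like this:

```cpp
#include <k4a/k4a.h>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <cstdint>
#include <cstdio>

// Sketch: capture one synchronized frame, transform the depth image into the
// color camera, display the images, and print the depth (in mm) at the center
// of the color image.
int main()
{
    k4a_device_t device = NULL;
    k4a_device_open(K4A_DEVICE_DEFAULT, &device);

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_format = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    config.synchronized_images_only = true;
    k4a_device_start_cameras(device, &config);

    k4a_calibration_t calibration;
    k4a_device_get_calibration(device, config.depth_mode, config.color_resolution, &calibration);
    k4a_transformation_t transformation = k4a_transformation_create(&calibration);

    k4a_capture_t capture = NULL;
    k4a_device_get_capture(device, &capture, 1000);
    k4a_image_t color = k4a_capture_get_color_image(capture);
    k4a_image_t depth = k4a_capture_get_depth_image(capture);

    const int w = k4a_image_get_width_pixels(color);
    const int h = k4a_image_get_height_pixels(color);

    k4a_image_t transformed_depth = NULL;
    k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16, w, h, w * (int)sizeof(uint16_t), &transformed_depth);
    k4a_transformation_depth_image_to_color_camera(transformation, depth, transformed_depth);

    // Wrap the K4A buffers in cv::Mat headers (no copy) for display.
    cv::Mat color_mat(h, w, CV_8UC4, k4a_image_get_buffer(color));
    cv::Mat depth_mat(k4a_image_get_height_pixels(depth), k4a_image_get_width_pixels(depth),
                      CV_16UC1, k4a_image_get_buffer(depth));
    cv::Mat transformed_mat(h, w, CV_16UC1, k4a_image_get_buffer(transformed_depth));

    cv::imshow("color", color_mat);
    cv::imshow("depth", depth_mat * 16);              // scaled for visibility
    cv::imshow("transformed depth", transformed_mat * 16);
    cv::waitKey(0);

    // Depth at the center of the color image; 0 means the reading is invalid.
    printf("depth at center: %d mm\n", (int)transformed_mat.at<uint16_t>(h / 2, w / 2));

    k4a_image_release(transformed_depth);
    k4a_image_release(depth);
    k4a_image_release(color);
    k4a_capture_release(capture);
    k4a_transformation_destroy(transformation);
    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```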
Screenshots Here I append the depth image and the transformed image that I have collected. Original depth image: Transformed depth image:
Desktop (please complete the following information):