microsoft / Azure-Kinect-Sensor-SDK

A cross platform (Linux and Windows) user mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect
MIT License

Invalidation of depth pixels at edges #1156

Closed sanmaebal4 closed 3 years ago

sanmaebal4 commented 4 years ago

I am using the Azure Kinect in the NFOV Unbinned depth mode to detect a chessboard. I make a capture of the chessboard and then transform this capture into a point cloud with a program written in C/C++ in Visual Studio. The point cloud is saved to a PLY file, which I can open in a program named CloudCompare that lets me take measurements of the chessboard. What I observed (you can see it in the images below) is that at the edges of the chessboard a lot of pixels disappear. I ran some tests and found that the greater the distance, the greater the loss of pixels. The problem started with the chessboard at 2 meters from the camera, and at 3, 4, or 5 meters it gets worse. At the same time, I discovered that if there is a wall just behind the chessboard, the pixels do not disappear. It is as if the camera has a filter that invalidates pixels lying on the edge of an object that is far from its background.

What I want to know is whether it is possible to remove this filter, or to add something to my Visual Studio program so that these disappeared pixels are not lost. Below are some screenshots of my problem:

sanmaebal4 commented 4 years ago

I saw that you put the label "More info needed". Could you tell me which information you need?

qm13 commented 4 years ago

@sanmaebal4 Looking at the images, we believe the edge does not have enough SNR to produce reliable depth, so those pixels have been eliminated by the depth processor.