OpenKinect / libfreenect2

Open source drivers for the Kinect for Windows v2 device

Questions concerning the depth post processing of the original Microsoft Driver #134

Closed. DamienLefloch closed this issue 9 years ago.

DamienLefloch commented 9 years ago

Hello all,

For my research, I am currently working with the Kinect Time-of-Flight camera, and I have a question to which I need to find an answer.

I already tried to contact Joshua Blake, and he suggested that I ask here on GitHub since he was not 100% sure about it himself.

I would like to know if some of you have more information about what kinds of processing filters are applied in the original driver to improve the raw depth data. I read in one of the commit logs: "first working version of ir/depth decoding; several post processing steps like depth disambiguation, bilateral filtering, edge-aware filtering, implemented in the official SDK are missing; the implemented CPU decoding runs at 10Hz or less;"

So I guess that a library contributor wrote this and may have more information.

Joshua Blake answered me about this log: "Oh, those notes were written by one of the other contributors. You could post a question about it addressing who wrote that. Others did a detailed analysis of the Microsoft GPU shader implementation so they might know more."

It would be a great help for me to have that information: whether a bilateral filter is really applied to smooth the data, and what exactly is meant by edge-aware filtering (bilateral filtering is usually known as an edge-aware filter; or is it just the mixed-pixel removal filter?). I also guess that the Microsoft developers do some multi-path detection and masking, but this does not really change the depth quality.
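(For context on the terminology: a bilateral filter weights each neighbor by both spatial distance and intensity difference, which is why it is commonly described as edge-aware. A minimal single-channel sketch, purely illustrative and not the SDK or shader code:)

```cpp
#include <cmath>
#include <vector>

// Illustrative bilateral filter on a single-channel float image.
// Neighbors are weighted by spatial closeness AND intensity similarity,
// so pixels across a strong intensity edge contribute little -- this is
// what makes the filter "edge-aware".
std::vector<float> bilateralFilter(const std::vector<float> &img, int width, int height,
                                   int radius, float sigmaSpace, float sigmaRange)
{
  std::vector<float> out(img.size(), 0.0f);
  for (int y = 0; y < height; ++y)
  {
    for (int x = 0; x < width; ++x)
    {
      float center = img[y * width + x];
      float sum = 0.0f, wsum = 0.0f;
      for (int dy = -radius; dy <= radius; ++dy)
      {
        for (int dx = -radius; dx <= radius; ++dx)
        {
          int nx = x + dx, ny = y + dy;
          if (nx < 0 || nx >= width || ny < 0 || ny >= height)
            continue;
          float neighbor = img[ny * width + nx];
          float ds2 = float(dx * dx + dy * dy);  // squared spatial distance
          float dr = neighbor - center;          // intensity difference
          float w = std::exp(-ds2 / (2.0f * sigmaSpace * sigmaSpace))
                  * std::exp(-(dr * dr) / (2.0f * sigmaRange * sigmaRange));
          sum += w * neighbor;
          wsum += w;
        }
      }
      out[y * width + x] = sum / wsum; // wsum > 0: the center pixel always has weight 1
    }
  }
  return out;
}
```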

Thanks in advance for your time.

Damien

christiankerl commented 9 years ago

Hi Damien,

the current depth processing code in Cpu/OpenGL/OpenCLDepthPacketProcessor does the same things as the shader shipped with the K4W2 Preview SDK (this might have changed in the meantime). The bilateral filter is applied to the complex-valued images before computing the amplitude/phase (depth). It is only aware of intensity edges in these images. The "edge-aware" filter basically tries to filter out the flying pixels at object boundaries by computing some statistics in a local neighborhood. Both filters can be disabled in libfreenect2.
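For reference, a minimal sketch of disabling both filters through the device configuration. It assumes the `Freenect2Device::Config` interface of current libfreenect2 (`EnableBilateralFilter`, `EnableEdgeAwareFilter`); the flag names and location may differ in the version discussed in this thread:

```cpp
#include <libfreenect2/libfreenect2.hpp>

int main()
{
  libfreenect2::Freenect2 freenect2;

  // Open the first Kinect v2 found on the bus.
  libfreenect2::Freenect2Device *dev = freenect2.openDefaultDevice();
  if (dev == nullptr)
    return -1;

  // Assumed configuration fields: switch off both post-processing filters
  // so the processor outputs the unfiltered phase/depth result.
  libfreenect2::Freenect2Device::Config config;
  config.EnableBilateralFilter = false;
  config.EnableEdgeAwareFilter = false;
  dev->setConfiguration(config);

  // ... attach color and IR/depth frame listeners here (see the Protonect example), then:
  dev->start();
  // ... receive and process frames ...
  dev->stop();
  dev->close();
  return 0;
}
```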

DamienLefloch commented 9 years ago

Hello Christian,

Thank you for your fast answer.

OK, that is now clear to me. I knew from the shader that a joint bilateral filter using intensity was applied and that a flying-pixel removal was also applied afterwards to the raw depth. But I thought that additional filters were applied later on, on the raw depth itself.
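(For illustration, a minimal sketch of a flying-pixel style check on a raw depth map, assuming a simple local-neighborhood statistic; this is not the exact SDK or libfreenect2 logic:)

```cpp
#include <cmath>
#include <vector>

// Illustrative flying-pixel removal on a raw depth map (values in millimeters,
// 0 = invalid). A pixel is invalidated when its depth jumps too far from most
// of its 8 neighbors, which is what happens at object boundaries where a pixel
// mixes foreground and background returns.
void removeFlyingPixels(std::vector<float> &depth, int width, int height,
                        float maxJumpMm = 50.0f, int maxOutliers = 4)
{
  std::vector<float> in = depth; // work on a copy so checks are order-independent
  for (int y = 1; y < height - 1; ++y)
  {
    for (int x = 1; x < width - 1; ++x)
    {
      float d = in[y * width + x];
      if (d <= 0.0f)
        continue; // already invalid
      int outliers = 0;
      for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
        {
          if (dx == 0 && dy == 0)
            continue;
          float n = in[(y + dy) * width + (x + dx)];
          if (n <= 0.0f || std::fabs(n - d) > maxJumpMm)
            ++outliers;
        }
      if (outliers > maxOutliers)
        depth[y * width + x] = 0.0f; // flag as flying pixel
    }
  }
}
```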

Maybe this changed with the commercial Kinect ToF, but since I use the prototype, I do not really mind.

Thanks again