microsoft / Azure-Kinect-Sensor-SDK

A cross platform (Linux and Windows) user mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect
MIT License

Please expose more depth reconstruction settings #1308

Open drewm1980 opened 4 years ago

drewm1980 commented 4 years ago

Is your feature request related to a problem? Please describe.

3D object features well below the resolution of the sensor get filtered away.

Describe the solution you'd like

Please provide more control in the API (and viewer) over how heavy the filtering is. The SDK currently seems to provide only one setting related to 3D reconstruction (the Depth Mode; see the sketch below). By comparison, Intel's RealSense L515 drivers provide much more control over the emitter, sensor, and post-processing functions.
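
For reference, a minimal sketch of that single depth-related knob as exposed by the public C API. Any spatial/temporal filtering appears to happen downstream, inside the sensor/firmware, with no further settings exposed:

```c
// Minimal sketch: the one depth-related setting the current SDK exposes.
#include <k4a/k4a.h>
#include <stdio.h>

int main(void)
{
    k4a_device_t device = NULL;
    if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED)
    {
        fprintf(stderr, "failed to open device\n");
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED; // the single depth knob
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    if (k4a_device_start_cameras(device, &config) != K4A_RESULT_SUCCEEDED)
    {
        fprintf(stderr, "failed to start cameras\n");
        k4a_device_close(device);
        return 1;
    }

    /* ... capture loop ... */

    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```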

I can't make concrete suggestions since I don't know what is actually performed in the sensor, but I am particularly interested in reducing the amount of spatial and temporal filtering that seems to happen there. Also, reducing emitter power can help with saturation-related artifacts.
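
For comparison, a rough sketch of how librealsense exposes per-sensor tuning (such as emitter/laser power) through its generic options interface. The sensor index (0) and power value (60) here are illustrative assumptions, not verified L515 specifics, and option support varies by device:

```c
// Rough comparison sketch: lowering emitter power via librealsense's
// generic per-sensor options interface.
#include <librealsense2/rs.h>
#include <stdio.h>

int main(void)
{
    rs2_error *e = NULL;
    rs2_context *ctx = rs2_create_context(RS2_API_VERSION, &e);
    rs2_device_list *devices = rs2_query_devices(ctx, &e);
    if (e != NULL || rs2_get_device_count(devices, &e) == 0)
    {
        fprintf(stderr, "no RealSense device found\n");
        return 1;
    }

    rs2_device *dev = rs2_create_device(devices, 0, &e);
    rs2_sensor_list *sensors = rs2_query_sensors(dev, &e);
    rs2_sensor *sensor = rs2_create_sensor(sensors, 0, &e); // assume index 0 is the depth sensor

    // Lower the emitter power if this sensor supports the option
    // (useful against the saturation artifacts mentioned above).
    rs2_options *opts = (rs2_options *)sensor;
    if (rs2_supports_option(opts, RS2_OPTION_LASER_POWER, &e))
    {
        rs2_set_option(opts, RS2_OPTION_LASER_POWER, 60.0f, &e);
    }

    rs2_delete_sensor(sensor);
    rs2_delete_sensor_list(sensors);
    rs2_delete_device(dev);
    rs2_delete_device_list(devices);
    rs2_delete_context(ctx);
    return 0;
}
```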

Brekel commented 4 years ago

Fully agree!

The current default settings for generating the depth data are generally excellent (and better than competing sensors), but it would definitely help advanced users to be able to tune things for specific scenarios.

For example, scenarios where subjects are close to versus far from the sensor may benefit from different settings. Some scenarios may benefit from disabling filtering/post-processing or making it less aggressive. Scenarios where latency matters most could benefit from disabling some processing, even if that increases noise.

The same holds true for body tracking: the fact that the team implemented a setting for joint smoothing (and the ability to disable it; see the sketch below) made a huge difference for my applications at least.
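
For anyone looking for that control: it is k4abt_tracker_set_temporal_smoothing in the Body Tracking SDK, taking a factor in [0, 1] where 0 disables smoothing. A minimal sketch, assuming a calibration obtained from k4a_device_get_calibration():

```c
// The joint-smoothing control mentioned above, from the Body Tracking SDK.
#include <k4a/k4a.h>
#include <k4abt.h>

// Assumes `calibration` came from k4a_device_get_calibration().
k4abt_tracker_t create_unsmoothed_tracker(k4a_calibration_t calibration)
{
    k4abt_tracker_t tracker = NULL;
    k4abt_tracker_configuration_t config = K4ABT_TRACKER_CONFIG_DEFAULT;
    if (k4abt_tracker_create(&calibration, config, &tracker) == K4A_RESULT_SUCCEEDED)
    {
        k4abt_tracker_set_temporal_smoothing(tracker, 0.0f); // 0 = no smoothing
    }
    return tracker;
}
```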

The more settings the better, and if things change in future SDKs or sensor models, that is generally not an issue for advanced users anyway.

ialhashim commented 4 years ago

That would be great. We are researching depth completion and estimation, and it would help us a lot to get the raw data from the sensor and, hopefully, more control over other aspects of the emitter and sensor.
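
As far as I know, the SDK does not expose the raw ToF phase data at all; the closest thing to raw output it currently offers is the passive-IR depth mode, which streams 16-bit IR intensities and skips depth computation entirely. A minimal sketch, assuming the device is already open:

```c
// Minimal sketch: grabbing the passive-IR stream, the closest thing to raw
// sensor output the current SDK exposes. Assumes `device` is already open.
#include <k4a/k4a.h>

k4a_image_t grab_passive_ir(k4a_device_t device)
{
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_PASSIVE_IR; // IR only, no depth pipeline
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;

    k4a_image_t ir = NULL;
    if (k4a_device_start_cameras(device, &config) == K4A_RESULT_SUCCEEDED)
    {
        k4a_capture_t capture = NULL;
        if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
        {
            ir = k4a_capture_get_ir_image(capture); // caller releases via k4a_image_release()
            k4a_capture_release(capture);
        }
        k4a_device_stop_cameras(device);
    }
    return ir;
}
```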

HeavyRain-zym commented 3 years ago

The raw depth data has already been adjusted for optimization, but we want the raw depth and point cloud, as Kinect 2.0 provided, so we can scan an object's fine details.
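
For context, a minimal sketch of how depth and the point cloud come out of the current SDK: by the time k4a_image_get_buffer returns, the in-sensor processing has already been applied, which is exactly the stage this issue asks to make configurable. Assumes `device` was started with a depth mode enabled and `calibration` came from k4a_device_get_calibration():

```c
// Minimal sketch: reading a (firmware-processed) depth frame and converting
// it to an XYZ point cloud of int16 millimeter triplets.
#include <k4a/k4a.h>
#include <stdint.h>

void grab_point_cloud(k4a_device_t device, k4a_calibration_t calibration)
{
    k4a_capture_t capture = NULL;
    if (k4a_device_get_capture(device, &capture, 1000) != K4A_WAIT_RESULT_SUCCEEDED)
        return;

    k4a_image_t depth = k4a_capture_get_depth_image(capture);
    int w = k4a_image_get_width_pixels(depth);
    int h = k4a_image_get_height_pixels(depth);
    uint16_t *depth_mm = (uint16_t *)k4a_image_get_buffer(depth); // depth in millimeters

    // Convert to a point cloud (x, y, z int16 triplets per pixel).
    k4a_transformation_t t = k4a_transformation_create(&calibration);
    k4a_image_t xyz = NULL;
    k4a_image_create(K4A_IMAGE_FORMAT_CUSTOM, w, h,
                     w * 3 * (int)sizeof(int16_t), &xyz);
    k4a_transformation_depth_image_to_point_cloud(t, depth,
                                                  K4A_CALIBRATION_TYPE_DEPTH, xyz);

    /* ... use depth_mm and the xyz buffer ... */
    (void)depth_mm;

    k4a_image_release(xyz);
    k4a_transformation_destroy(t);
    k4a_image_release(depth);
    k4a_capture_release(capture);
}
```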

DrMaxP commented 3 years ago

I'd echo this request. Having more control over depth-data preprocessing would be great. What's the status of this?