microsoft / Azure-Kinect-Sensor-SDK

A cross-platform (Linux and Windows) user-mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect

Device position and orientation estimation with IMU and camera #948

Open · spyoha opened this issue 4 years ago

spyoha commented 4 years ago

I hope the Azure Kinect can also provide its (relative) position and orientation, not only raw acceleration values (just as the HoloLens does splendidly).

I've tried to build this feature myself, based on the raw IMU data I can get from the latest SDK (1.3.0), but several problems made it challenging.

I wonder if anyone else is already attempting similar work for the Azure Kinect.
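
For anyone starting from the same point, this is roughly how the raw IMU stream can be read with the SDK's C API - a minimal sketch, with abbreviated error handling; the depth mode and timeout values here are arbitrary choices:

```c
#include <stdio.h>
#include <k4a/k4a.h>

int main(void)
{
    k4a_device_t device = NULL;
    if (k4a_device_open(K4A_DEVICE_DEFAULT, &device) != K4A_RESULT_SUCCEEDED)
    {
        printf("Failed to open device\n");
        return 1;
    }

    // The IMU stream only runs while the cameras are started.
    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps = K4A_FRAMES_PER_SECOND_30;
    if (k4a_device_start_cameras(device, &config) != K4A_RESULT_SUCCEEDED ||
        k4a_device_start_imu(device) != K4A_RESULT_SUCCEEDED)
    {
        printf("Failed to start cameras/IMU\n");
        k4a_device_close(device);
        return 1;
    }

    for (int i = 0; i < 100; i++)
    {
        k4a_imu_sample_t sample;
        // Blocks up to 1000 ms for the next IMU sample.
        if (k4a_device_get_imu_sample(device, &sample, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
        {
            printf("acc (m/s^2): %f %f %f  gyro (rad/s): %f %f %f\n",
                   sample.acc_sample.xyz.x, sample.acc_sample.xyz.y, sample.acc_sample.xyz.z,
                   sample.gyro_sample.xyz.x, sample.gyro_sample.xyz.y, sample.gyro_sample.xyz.z);
        }
    }

    k4a_device_stop_imu(device);
    k4a_device_stop_cameras(device);
    k4a_device_close(device);
    return 0;
}
```

Getting from these raw samples to a stable pose is the hard part.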

Chris45215 commented 4 years ago

With the existing hardware, it should be possible to provide some orientation (rotation) data from the IMU sensors alone - enough to determine the pitch and roll of the camera. This would be very helpful even if the resulting numbers are a bit rough, and the same information can also be acquired with a floor estimation function. A built-in compass would add the yaw direction, at least relative to north (or relative to neighboring cameras), which would make it much easier to coordinate multiple cameras overlooking the same area. That would mostly resolve the orientation coordinates, leaving only the position to be solved.
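
To illustrate: pitch and roll fall out of the gravity direction measured by the accelerometer alone, at least while the camera is static. A minimal sketch - the mapping of axes to the k4a IMU frame is my assumption and may need adjusting:

```c
#include <math.h>
#include <k4a/k4a.h>

// Estimate pitch and roll (radians) from a single accelerometer reading.
// Only valid while the device is roughly static, so gravity dominates the
// measurement; the axis mapping below is assumed, not taken from the docs.
static void pitch_roll_from_accel(const k4a_imu_sample_t *s, float *pitch, float *roll)
{
    float ax = s->acc_sample.xyz.x;
    float ay = s->acc_sample.xyz.y;
    float az = s->acc_sample.xyz.z;
    *pitch = atan2f(ax, sqrtf(ay * ay + az * az));
    *roll  = atan2f(ay, az);
}
```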

PierrePlantard commented 4 years ago

We implemented fusion algorithms (Madgwick and Mahony) to obtain orientation from the accelerometer and gyroscope data. They work fine for pitch and roll, but the yaw drift is very high (20° of yaw drift error in 1 minute). Do you plan to develop a more robust solution to obtain the orientation of the sensor?
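
For context, the drift mechanism is visible even in a filter simpler than Madgwick/Mahony: pitch and roll can be corrected toward the accelerometer's gravity estimate, but yaw has no absolute reference (no magnetometer is exposed by the SDK), so gyro bias integrates without bound. A rough complementary-filter sketch, again with an assumed axis mapping:

```c
#include <math.h>
#include <k4a/k4a.h>

// Minimal complementary filter: integrate the gyro, then pull pitch/roll
// toward the accelerometer's gravity estimate. Yaw gets no correction,
// so it drifts at roughly the gyro bias rate.
typedef struct { float pitch, roll, yaw; } attitude_t;

static void complementary_update(attitude_t *att, const k4a_imu_sample_t *s,
                                 float dt, float alpha /* e.g. 0.98f */)
{
    // Gyro integration (rad/s * s); axis mapping is an assumption.
    float p_gyro = att->pitch + s->gyro_sample.xyz.x * dt;
    float r_gyro = att->roll  + s->gyro_sample.xyz.y * dt;
    att->yaw    += s->gyro_sample.xyz.z * dt;   // uncorrected: drifts

    // Gravity-based pitch/roll from the accelerometer.
    float ax = s->acc_sample.xyz.x, ay = s->acc_sample.xyz.y, az = s->acc_sample.xyz.z;
    float p_acc = atan2f(ax, sqrtf(ay * ay + az * az));
    float r_acc = atan2f(ay, az);

    // Blend: trust the gyro short-term, the accelerometer long-term.
    att->pitch = alpha * p_gyro + (1.0f - alpha) * p_acc;
    att->roll  = alpha * r_gyro + (1.0f - alpha) * r_acc;
}
```

Here dt can be computed from the difference of successive gyro_timestamp_usec values.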

PierrePlantard commented 4 years ago

It is very important for vision applications (robotics, mobile tracking) to provide a VIO (visual-inertial odometry) method to get the position and orientation of the camera. All the hardware components are already embedded in the Azure Kinect (IMU, depth camera, color camera); it would be a shame not to use them. All the competitors already provide such a feature in the SDKs of their depth cameras (Intel RealSense, ZED stereo camera), as do many AR mobile development kits (ARKit, ARCore, ...). Do you plan to develop such a feature?

HiroyukiSakoh commented 4 years ago

https://feedback.azure.com/forums/920053-azure-kinect-dk/suggestions/38471407-imu-example-or-api
https://feedback.azure.com/forums/920053-azure-kinect-dk/suggestions/40198582-please-provide-a-visual-inertial-odometry-vio-me
https://feedback.azure.com/forums/920053-azure-kinect-dk/suggestions/39018925-add-slam-function

neilDGD commented 4 years ago

We would like pose and position estimation for SLAM too, for scanning live environments (one of many possible uses in live VFX production). Without extensive research into this problem (and perhaps still ending up with noisy or biased data), the IMU data provided by the SDK is not useful, for us at least. We are not primarily an R&D lab, so putting manpower into solving such problems is unproductive for us. Any solution or example material would be much appreciated.

dnlwbr commented 3 years ago

Any news regarding this?

Thaina commented 1 year ago

Has there been any progress on this?