microsoft / RoomAliveToolkit

Compatibility with other depth cameras #51

Open MikeGameDev opened 7 years ago

MikeGameDev commented 7 years ago

Hey,

I know the toolkit only reads and processes data from the Kinect v2, but theoretically speaking, if I were to modify the toolkit to take in data streams from other depth cameras, such as Intel's F200, would the calibration math still work?

I can't see any reason why it wouldn't; the toolkit doesn't seem to rely on the Kinect SDK for anything other than reading the camera streams. I figured I'd ask here before trying.

Thanks

thundercarrot commented 7 years ago

Support for other depth cameras is something we've been thinking about for a while. Most depth cameras out there do not have the range necessary for room-scale applications, however. The Kinect v2 depth camera has a range of up to 8 m.

The toolkit uses the Kinect SDK for a bit more than streaming. Depth and color camera parameters are calculated using the CoordinateMapper methods from the Kinect SDK (see Kinect2Calibration.cs). It may be that other depth camera SDKs provide those numbers for you. I don't know about the Intel RealSense SDK. If you find out, do let us know!

In theory if the camera fully supports the newer Windows 10 capture APIs, these calibration figures should be available.
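Roughly like this, though I haven't verified it end to end. A minimal UWP sketch; CameraIntrinsics comes back null when the driver doesn't publish calibration data:

```csharp
// Untested UWP sketch: reading per-frame intrinsics via the Windows 10
// capture APIs. CameraIntrinsics is null when the driver doesn't
// publish calibration data.
using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;

class IntrinsicsProbe
{
    public static async Task RunAsync()
    {
        var groups = await MediaFrameSourceGroup.FindAllAsync();
        var group = groups.First(); // pick the depth camera's group here

        var capture = new MediaCapture();
        await capture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            SourceGroup = group,
            MemoryPreference = MediaCaptureMemoryPreference.Cpu,
        });

        var reader = await capture.CreateFrameReaderAsync(capture.FrameSources.Values.First());
        reader.FrameArrived += (sender, args) =>
        {
            using (var frame = sender.TryAcquireLatestFrame())
            {
                var intrinsics = frame?.VideoMediaFrame?.CameraIntrinsics;
                if (intrinsics != null)
                {
                    // Focal length and principal point, in pixels.
                    Console.WriteLine($"f = {intrinsics.FocalLength}, c = {intrinsics.PrincipalPoint}");
                }
            }
        };
        await reader.StartAsync();
    }
}
```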

MikeGameDev commented 7 years ago

Thanks for the response!

Seems I overlooked that bit in the Kinect2Calibration class. If I understand correctly, the RecoverCalibrationFromSensor function creates a set of points in the Kinect's view frustum and maps them to pixels in the depth and color images; it then passes these point correspondences to various functions to compute the intrinsic matrices for the color and depth cameras. Is this correct? If so, I should be able to use the RealSense mapping functions in place of the Kinect SDK ones. That said, it does seem that RealSense provides access to the sensor's intrinsics directly.
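For what it's worth, here is a minimal sketch of that step as I understand it (GridPoints is a placeholder of mine, not a toolkit function):

```csharp
// Sketch of the correspondence-generation step as I understand it, using
// the Kinect v2 SDK's CoordinateMapper. GridPoints() is a hypothetical
// helper; the real RecoverCalibrationFromSensor differs in its details.
using Microsoft.Kinect;

class CorrespondenceSketch
{
    static void GenerateCorrespondences()
    {
        var sensor = KinectSensor.GetDefault();
        sensor.Open();
        var mapper = sensor.CoordinateMapper;

        // 3D sample points spread through the view frustum (meters, in
        // Kinect camera space).
        CameraSpacePoint[] cameraPoints = GridPoints();

        // Project the same 3D points into both image planes.
        var depthPoints = new DepthSpacePoint[cameraPoints.Length];
        var colorPoints = new ColorSpacePoint[cameraPoints.Length];
        mapper.MapCameraPointsToDepthSpace(cameraPoints, depthPoints);
        mapper.MapCameraPointsToColorSpace(cameraPoints, colorPoints);

        // Each (cameraPoints[i], depthPoints[i]) and (cameraPoints[i],
        // colorPoints[i]) pair is a 3D-to-2D correspondence for the
        // intrinsics solver.
    }

    // Placeholder: fill with a grid of points spanning the frustum.
    static CameraSpacePoint[] GridPoints() => new CameraSpacePoint[0];
}
```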

I'm actually not going to be doing room-scale stuff; I'm using an HP Sprout. That said, I have used the RoomAlive Toolkit for room-scale projects in the past; in fact, it's the only calibration software that worked for me on the first try! I'm hoping I can get this working on the Sprout without too much pain...

Thanks again!

thundercarrot commented 7 years ago

The Kinect2Calibration class has the intrinsics for the depth camera and color camera, as well as the relative pose between the two cameras. Both are necessary. Beware of coordinate conventions like y up/down, x left/right, normalized pixel units, etc. Good luck!
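To make the convention point concrete, here is an illustrative sketch of packing pinhole intrinsics (pixel units, origin top-left, y down) into a 3x3 camera matrix, with the y-flip you may need if the target convention is y up. This is not what Kinect2Calibration does internally, just the general idea:

```csharp
// Illustrative only: packing pinhole intrinsics into a 3x3 camera
// matrix. fx, fy, cx, cy are assumed to be in pixel units with the
// origin at the top-left and y pointing down (RealSense's convention).
static double[,] CameraMatrix(double fx, double fy, double cx, double cy,
                              int imageHeight, bool flipY)
{
    if (flipY)
    {
        // Switch to a y-up convention: negate fy and reflect the
        // principal point about the image's horizontal midline.
        fy = -fy;
        cy = (imageHeight - 1) - cy;
    }
    return new double[,]
    {
        { fx,  0, cx },
        {  0, fy, cy },
        {  0,  0,  1 },
    };
}
```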

MikeGameDev commented 7 years ago

I've got a C# question, if you don't mind me asking: is it safe to create a RealSenseServer class that implements the KinectServer2 interface defined in KinectClient.cs? If I can do this, then all I need to do is set the Client reference in the ProjectorEnsemble.Camera class to either a RealSenseServer object or a KinectServer2Client object, depending on which type of camera is being used. The proposed RealSenseServer would behave identically to KinectServer2Client, but instead of pulling data from the KinectServer it would use the equivalent functions in the RealSense library.
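Concretely, the shape I have in mind is something like the sketch below; the interface members are invented placeholders rather than the actual KinectServer2 contract, and I haven't verified the RealSense calls:

```csharp
// Illustration of the pattern only. The interface members are invented;
// substitute the operations actually declared on KinectServer2 in
// KinectClient.cs. RealSense calls are from the Intel.RealSense wrapper
// and are unverified.
using Intel.RealSense;

public interface IDepthCameraClient      // stand-in for KinectServer2
{
    byte[] LatestDepthImage();           // hypothetical operation
    byte[] LatestColorImage();           // hypothetical operation
}

public class RealSenseServer : IDepthCameraClient
{
    readonly Pipeline pipeline = new Pipeline();

    public RealSenseServer()
    {
        pipeline.Start(); // default depth + color streams
    }

    public byte[] LatestDepthImage()
    {
        using (var frames = pipeline.WaitForFrames())
        using (var depth = frames.DepthFrame)
        {
            var data = new byte[depth.Stride * depth.Height];
            depth.CopyTo(data);
            return data;
        }
    }

    public byte[] LatestColorImage()
    {
        using (var frames = pipeline.WaitForFrames())
        using (var color = frames.ColorFrame)
        {
            var data = new byte[color.Stride * color.Height];
            color.CopyTo(data);
            return data;
        }
    }
}
```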

Normally I would have just implemented the interface without hesitation, but the "This code was generated by a tool" comment at the top of KinectClient.cs threw me off, because I don't know anything about how that code was generated. Could you please shed some light on this? What is the code generated from?

Most of the work I do is in C/C++; the only C# I ever write is in Unity... so I guess what I'm trying to say is that I'm not super familiar with all of the features of C#.

Any help would be greatly appreciated!

thundercarrot commented 7 years ago

Sorry I missed this question until now. The client/server communication uses WCF. The auto-generated code is a WCF client proxy (elsewhere there is a comment that gives the command that generates it). If you've ever used RPC in the old days, this overall approach should feel familiar. In a future release we are likely to use something other than WCF.
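For orientation, the hand-written part of a WCF service is just an attributed contract interface, roughly like the generic sketch below (not the actual RoomAlive contract). A tool (svcutil.exe) then generates the client-side proxy, which is where that "generated by a tool" banner comes from:

```csharp
// Generic WCF shape, for orientation only; not the actual RoomAlive
// contract. The service declares an attributed interface, and the
// generated client proxy forwards calls to it over the wire.
using System.ServiceModel;

[ServiceContract]
public interface IDepthCameraService
{
    [OperationContract]
    byte[] LatestDepthImage();   // hypothetical operation
}

// Server side: implement the contract and host it in a ServiceHost.
public class DepthCameraService : IDepthCameraService
{
    public byte[] LatestDepthImage()
    {
        return new byte[0]; // placeholder
    }
}

// Client side: the generated proxy derives from ClientBase<TChannel>,
// so callers use the remote camera as if it were a local object,
// which is classic RPC.
```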

jensgrubert commented 6 years ago

Hi,

are there any updates on the integration of other depth cameras? This issue gets worse now that the Kinect v2 is discontinued. We cannot even get our hands on used Kinect v2 units for acceptable prices in Germany anymore. Integrating, e.g., the Intel RealSense cameras would ensure that the toolkit can be used for new projects in the future.

Best, Jens

thundercarrot commented 6 years ago

Hi Jens, no updates yet. Most of the cameras I see don't have the range to be useful at room scale.

The recently announced Intel RealSense D series looks like it might be worth looking into when they are available.

Do you have a particular camera in mind?

jensgrubert commented 6 years ago

Hi Andy,

the Intel RealSense D415 / D435 are the cameras we would like to investigate once they become available. Maybe we can join forces to make the integration happen. I might get a student to work on this starting in January. If you are interested, we could sync separately on the exact tasks to be done.

Best, Jens

ehtick commented 6 years ago

It would be great if the SDK supported the new Intel RealSense D415 / D435. We used the Kinect v2 camera and now have no option to continue with it. Can't wait for that :)

regards

Eddy

jensgrubert commented 6 years ago

Hi Andy,

finally, a student of mine is working on integrating the Intel RealSense D400 series and the ZED stereo camera. We will probably need to change some of your base classes due to hardcoded Kinect parameters. Will keep you posted once we have a working solution. If you have any suggestions, we would be happy to incorporate them.

Best, Jens

kevinwertman commented 6 years ago

@jensgrubert

Hi Jens,

I have just been asked to look into tackling this same integration and was wondering how your team has been progressing. I may have the opportunity to pitch in as well.

Thanks, Kevin

Luchianno commented 5 years ago

@thundercarrot @jensgrubert Hello!

I've been following this thread for over a year now, and I would also like to know what the progress is on integrating other cameras. How is that student's work going? Also, I'm curious whether you need any help on the Unity side of the toolkit, such as providing examples or updating/expanding the existing package.

With respect, Luka

thundercarrot commented 5 years ago

Luka,

I've been focusing all my RoomAlive efforts on a completely new release that incorporates many improvements, including support for RealSense2 and Kinect for Azure cameras. Unity support has been completely reworked. Calibration calculations have been completely rewritten, with much more robust support for multiple-camera setups. There's a new REST API server for cameras, a JSON file format, and more.

I'm very excited to show you guys but it's just not ready yet. I hope to have something out in the Kinect for Azure timeframe, as I think that release will rekindle some interest in Kinect.

Thanks for your interest! Andy

rfilkov commented 5 years ago

This is great news!

jensgrubert commented 5 years ago

Thanks for the update, Andy. My student eventually had to abandon the integration on the old codebase, as too many changes were required (and by that time the RealSense and ZED APIs were quite buggy, too).

Looking forward to a new release.

Best, Jens

Munykang commented 4 years ago

Hello friends, I am looking for information for a therapy project in which we use a Kinect v2 camera. Since these can no longer be purchased, we want to find out whether we can adapt our code to use an Intel RealSense D435i camera for full-body tracking.

idchristianaubert commented 3 years ago

Andy, any news on updated RoomAlive with support for Realsense?

thundercarrot commented 3 years ago

I had to put my plans for releasing the new stuff on hold for a bit last year, but the project is still very much alive (see this most recent work of mine that uses it: https://youtu.be/s_IFDEEYEpI). I have no definitive dates for when it will be released, but I will update this discussion when I have further news. Thanks!

idchristianaubert commented 3 years ago

That looks mighty impressive. Any chance realsense cameras will be supported?

thundercarrot commented 3 years ago

Yes, RealSense cameras will be supported.

infixlabs commented 3 years ago

Any news on an updated version?

adielfernandez commented 2 years ago

Hello! Just checking in to see if the updated version with Kinect Azure support is ready for some beta testing! Any updates or places we could go to see how we might approach Azure integration?

MoritzSkowronski commented 1 year ago

I would also be very much interested in doing some beta testing for the Azure Kinect, if there are any updates on this! :)

P0trak commented 1 year ago

Hi, I'm currently trying to make the calibration work with the Azure Kinect - has anyone made any progress with this?

pzhine commented 1 year ago

> Luka,
>
> I've been focusing all my RoomAlive efforts on a completely new release that incorporates many improvements, including support for RealSense2 and Kinect for Azure cameras. Unity support has been completely reworked. Calibration calculations have been completely rewritten, with much more robust support for multiple-camera setups. There's a new REST API server for cameras, a JSON file format, and more.
>
> I'm very excited to show you guys but it's just not ready yet. I hope to have something out in the Kinect for Azure timeframe, as I think that release will rekindle some interest in Kinect.
>
> Thanks for your interest! Andy

Hi @thundercarrot, thank you for all of your hard work on this project! I'm wondering if you could post your work to date on the new version as a branch to the RoomAliveToolkit repo, even if it doesn't have all the kinks worked out. That way, the community (including my team) can contribute, which is in line with the spirit of OSS.

For background, we are a team of researchers at the Full-Body Interaction Lab (part of Universitat Pompeu Fabra in Spain), working on a projective mixed reality device. We are following the trail blazed by your team in terms of real-time projection mapping with dynamic depth data, and our device is like an untethered version of the one built by Molyneaux et al. in 2012 (https://link.springer.com/chapter/10.1007/978-3-642-31205-2_13). You can see a demo of an early version of the prototype, without projection mapping, here: https://emil-xr.eu/lighthouse-projects/upf-ar-magic-lantern/.

Our new prototype incorporates the RealSense D455, so my next task is to adapt the Kinect code to work with the depth and color data from that camera. Reading through this thread, it seems non-trivial, so any advice or code you can share would be very much appreciated. Thanks!
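In case it helps others here, this is the starting point I have in mind for pulling the calibration the toolkit needs out of the librealsense C# wrapper (Intel.RealSense). Untested, so treat the exact API names as assumptions on my part:

```csharp
// Untested sketch: reading the factory calibration the toolkit needs,
// i.e. depth and color intrinsics plus the depth-to-color pose, from
// the librealsense C# wrapper (Intel.RealSense package).
using Intel.RealSense;

class CalibrationDump
{
    static void Main()
    {
        var pipeline = new Pipeline();
        var config = new Config();
        config.EnableStream(Stream.Depth);
        config.EnableStream(Stream.Color);
        PipelineProfile profile = pipeline.Start(config);

        var depthProfile = profile.GetStream(Stream.Depth).As<VideoStreamProfile>();
        var colorProfile = profile.GetStream(Stream.Color).As<VideoStreamProfile>();

        // Pinhole intrinsics: focal length (fx, fy) and principal point
        // (ppx, ppy) in pixels, plus a distortion model.
        Intrinsics depthIntrinsics = depthProfile.GetIntrinsics();
        Intrinsics colorIntrinsics = colorProfile.GetIntrinsics();

        // Rigid transform (rotation + translation) from the depth camera
        // to the color camera: the "relative pose" discussed above.
        Extrinsics depthToColor = depthProfile.GetExtrinsicsTo(colorProfile);

        pipeline.Stop();
    }
}
```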