microsoft / Azure-Kinect-Sensor-SDK

A cross platform (Linux and Windows) user mode SDK to read data from your Azure Kinect device.
https://Azure.com/Kinect
MIT License

Get RGB body skeleton image from Azure Sensor Body Tracking DK #1014

Closed linstudionet closed 4 years ago

linstudionet commented 4 years ago

Is your feature request related to a problem? Please describe.

I'm currently developing an app in C# using the Azure Kinect Sensor and Body Tracking SDKs, and I am following the sample code here, built against the latest SDK version: https://github.com/microsoft/Azure-Kinect-Samples/tree/master/body-tracking-samples/csharp_3d_viewer

Of course I changed the device initialization to the following:

```csharp
device.StartCameras(new DeviceConfiguration()
{
    CameraFPS = FPS.FPS30,
    ColorResolution = ColorResolution.R720p,
    DepthMode = DepthMode.NFOV_Unbinned,
    WiredSyncMode = WiredSyncMode.Standalone,
});
```

But I still see the black-and-white body skeleton output on screen. What I want is a colored body skeleton image; can someone show me how to do it?

Describe the solution you'd like

A function that returns the RGB image in bitmap form would be appreciated, similar to line 200 in https://github.com/microsoft/Azure-Kinect-Samples/blob/master/build2019/csharp/1%20-%20AcquiringImages/MainWindow.xaml.cs

Describe alternatives you've considered

Any suggestion with detailed example code would be appreciated

Additional context

linstudionet commented 4 years ago

Hi @rabbitdaxi, what we need to do is overlay the skeleton on the RGB camera view. How can we access the RGB camera?

We may be able to use the depth map, but we still want the depth map overlaid transparently on the RGB camera view.

For our application, a person will invoke the data capture, and this person needs to be able to see the scene in the RGB camera view; the depth map is not a good image background for our case. (I'm Pensyl... you will be answering to our software engineer, Lin.)

We are in a bit of a deadline crunch... so a quick answer would help a lot... thanks!

wes-b commented 4 years ago

@linatpensyl is this still an open issue for you? A tool like OpenCV is helpful for overlaying these images. You can also colorize the depth image similarly to what K4aViewer does with the ColorizeBlueToRed function.

linstudionet commented 4 years ago

Yes, this is still an open issue for me. I am following the sample code that uses OpenGL here: https://github.com/microsoft/Azure-Kinect-Samples/blob/master/body-tracking-samples/csharp_3d_viewer/Renderer.cs In this sample code, at which line should I colorize the depth image using the ColorizeBlueToRed function, and what is the C# counterpart of the (C++) ColorizeBlueToRed in the library?

wes-b commented 4 years ago

You will have to create a C# version; I don't know of one to use as a sample.

linstudionet commented 4 years ago

What I am trying to ask is whether we are able to draw the skeleton on top of the RGB image captured from the Kinect RGB camera instead of the depth camera.

rabbitdaxi commented 4 years ago

@linatpensyl if you just want to draw the skeleton on the RGB camera image, you should be able to do that. For example, if you have a skeleton joint in the depth camera coordinate system, say as a float3 position, you can call the k4a_calibration_3d_to_2d() function, passing depth as the source camera and color as the target camera.

If you are asking about further overlaying the body tracking segmentation image on RGB, you can do that as well by transforming the entire segmentation image from the depth camera to the color camera with k4a_transformation_depth_image_to_color_camera_custom().

Which transformation approach to use depends on whether you want to transform a few points or the entire image. You can refer to https://docs.microsoft.com/en-us/azure/Kinect-dk/use-image-transformation and https://docs.microsoft.com/en-us/azure/Kinect-dk/use-calibration-functions

yijiew commented 4 years ago

@linatpensyl Please see this GitHub sample that transforms the skeletons from depth space to RGB camera space: https://github.com/microsoft/Azure-Kinect-Samples/tree/master/body-tracking-samples/camera_space_transform_sample After you finish the transformation, you can plot the 2D joints on the images with your preferred tool, such as OpenGL or OpenCV. However, the details of visualization are out of the scope of our SDK; you should be able to find answers online about how to visualize an RGB image and how to draw points and lines on top of it. GitHub issues are not meant for asking questions but for filing bugs, so I will close this issue for now. If you want further discussion about visualization, please ask on Stack Overflow.