microsoft / Azure_Kinect_ROS_Driver

A ROS sensor driver for the Azure Kinect Developer Kit.
MIT License

Add confidence values for body tracking joint data #87

Open RoseFlunder opened 5 years ago

RoseFlunder commented 5 years ago

With the new Body Tracking SDK 0.9.4, each joint has a confidence value which indicates whether the joint is out of range, not observed, or observed:

/** k4abt_joint_confidence_level_t
 *
 * \remarks
 * This enumeration specifies the joint confidence level.
 */
typedef enum
{
    K4ABT_JOINT_CONFIDENCE_NONE = 0,          /**< The joint is out of range (too far from depth camera) */
    K4ABT_JOINT_CONFIDENCE_LOW = 1,           /**< The joint is not observed (likely due to occlusion), predicted joint pose */
    K4ABT_JOINT_CONFIDENCE_MEDIUM = 2,        /**< Medium confidence in joint pose. Current SDK will only provide joints up to this confidence level */
    K4ABT_JOINT_CONFIDENCE_HIGH = 3,          /**< High confidence in joint pose. Placeholder for future SDK */
    K4ABT_JOINT_CONFIDENCE_LEVELS_COUNT = 4,  /**< The total number of confidence levels. */
} k4abt_joint_confidence_level_t;

We should publish this information together with the position of the joints. Currently we use a marker array which can be displayed easily in RViz: http://docs.ros.org/melodic/api/visualization_msgs/html/msg/MarkerArray.html

Each Marker has an ID, which is a combination of body ID & joint ID, and a position: http://docs.ros.org/melodic/api/visualization_msgs/html/msg/Marker.html

The question is: where do we put the confidence value? I guess it would be nice to stick with standard ROS messages, but I don't see a field that would fit this use case. We could put it in the "text" field, because it's unused for non-text markers and therefore free in our case. But this produces a warning in RViz ("Non empty marker text is ignored"), so it's not ideal. Any other ideas?

bearpaw commented 5 years ago

Can we use the alpha in colorrgba message? https://github.com/microsoft/Azure_Kinect_ROS_Driver/blob/melodic/src/k4a_ros_device.cpp#L780

RoseFlunder commented 5 years ago

Hmm, that would mean that "CONFIDENCE_NONE" joints would be fully transparent in RViz (alpha = 0), while all other levels would be fully opaque (alpha = 1). Is this better than using the text field, or even more confusing?
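As a sketch of the alpha idea, the levels could instead be spread across the alpha range rather than jumping straight from 0 to 1. The intermediate values below are arbitrary choices for illustration, not SDK behavior:

```cpp
// Confidence enum values as documented in the Body Tracking SDK snippet above.
typedef enum
{
    K4ABT_JOINT_CONFIDENCE_NONE = 0,
    K4ABT_JOINT_CONFIDENCE_LOW = 1,
    K4ABT_JOINT_CONFIDENCE_MEDIUM = 2,
    K4ABT_JOINT_CONFIDENCE_HIGH = 3,
} k4abt_joint_confidence_level_t;

// Hypothetical mapping from confidence level to the marker's ColorRGBA alpha.
// The 0.33/0.66 steps are made-up example values.
float confidence_to_alpha(k4abt_joint_confidence_level_t level)
{
    switch (level)
    {
        case K4ABT_JOINT_CONFIDENCE_NONE:   return 0.0f;   // invisible
        case K4ABT_JOINT_CONFIDENCE_LOW:    return 0.33f;  // faint
        case K4ABT_JOINT_CONFIDENCE_MEDIUM: return 0.66f;  // mostly visible
        case K4ABT_JOINT_CONFIDENCE_HIGH:   return 1.0f;   // opaque
        default:                            return 1.0f;
    }
}
```

That way the confidence is at least visually distinguishable, though a subscriber would still have to recover it from the color, which is lossy.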

RoseFlunder commented 5 years ago

I guess I could extend the ID, which is already a combination of body id and joint id, with the confidence level as well.

For example, an ID of 1021 would mean: body id = 1, joint id = 02, confidence = 1.

An ID of 12223: body id = 12, joint id = 22, confidence level = 3.

The level is only 0 to 3, so one decimal place is enough. The joint id gets two decimal places as before, and the rest is for the body id.

The publisher calculates it this way: marker_id = body_id * 1000 + joint_id * 10 + confidence_level

Clients could decode it like this: body_id = marker_id / 1000, joint_id = (marker_id % 1000) / 10, confidence = marker_id % 10

Are there any flaws with this? It would still be somewhat human-readable compared to bit shifting.
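A minimal sketch of that packing, assuming joint ids stay below 100 and confidence below 10 (the helper names are made up for illustration):

```cpp
// Pack body id, joint id (two decimal digits), and confidence level
// (one decimal digit) into a single marker id, as proposed above.
int encode_marker_id(int body_id, int joint_id, int confidence)
{
    return body_id * 1000 + joint_id * 10 + confidence;
}

// Recover the three fields from a packed marker id.
void decode_marker_id(int marker_id, int& body_id, int& joint_id, int& confidence)
{
    body_id    = marker_id / 1000;
    joint_id   = (marker_id % 1000) / 10;
    confidence = marker_id % 10;
}
```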

EDIT: Never mind, the marker_id can't have the confidence in it; it must only contain body_id & joint_id. If we added the confidence, the old markers for the same body id & joint id would not be replaced automatically in RViz when the confidence level changes. Or we would need to send a delete-marker message in between two messages with filled data, but that's not pretty, I think.

d-walsh commented 5 years ago

Perhaps it would be good to publish two different topics. One topic for visualization and a separate topic for other nodes to interpret the BodyTracking data.

1) Visualization = MarkerArray

2) Other nodes = Custom message
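For the second topic, a hypothetical .msg definition could look like this (none of these field names exist in the driver yet; they are purely illustrative):

```
# BodyTrackingJoint.msg (hypothetical)
std_msgs/Header header
uint32 body_id
uint32 joint_id
uint8 confidence_level   # k4abt_joint_confidence_level_t value, 0-3
geometry_msgs/Pose pose
```

This keeps the MarkerArray topic untouched for RViz while giving other nodes the full joint data, including confidence.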

RoseFlunder commented 5 years ago

That would be the optimal way if we want to introduce custom messages. @skalldri, what's your opinion about standard vs. custom messages?

About 1: We already use different colors depending on the body ID, like the simple viewer from the SDK. For example, body 1 = green, body 2 = red, etc.

ooeygui commented 5 years ago

I think having both options would be a good idea. If you need confidence values, subscribe to the new message; otherwise use the standard.

If we go down that path, please do it in two different pull requests.

The first would introduce the custom message infrastructure (including moving the current codebase down a level and adding a new node); the second would add the custom message and its implementation.

Make sense?

d-walsh commented 5 years ago

> About 1: We already use different colors depending on the body ID, like the simple viewer from the SDK. For example, body 1 = green, body 2 = red, etc.

You could also use different namespaces in the Marker message (the "ns" variable) to be able to enable/disable a subset of the markers in RViz. http://docs.ros.org/melodic/api/visualization_msgs/html/msg/Marker.html

You could separate based on either person or confidence value.
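As a rough sketch, the namespace strings could be derived either way (both helpers are hypothetical, not driver code):

```cpp
#include <string>

// Group markers by person: each body gets its own RViz-toggleable namespace.
std::string ns_by_body(int body_id)
{
    return "body_" + std::to_string(body_id);
}

// Group markers by confidence: low-confidence joints can be hidden as a set.
std::string ns_by_confidence(int confidence_level)
{
    return "confidence_" + std::to_string(confidence_level);
}
```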

AswinkarthikeyenAK commented 4 years ago

Hi guys, is it possible to obtain the pose and perform joint tracking in RViz using the Azure Kinect camera? Is there a ROS package that can do this? I am looking for something like the openpose package.

Thanks

ooeygui commented 4 years ago

@AswinkarthikeyenAK Yes, the data is available in the SDK, but it hasn't been plumbed through the ROS node. We had a discussion about how it could be done. I added the help wanted tag, as this work hasn't made it to the top of the team's priority queue.

AswinkarthikeyenAK commented 3 years ago

@ooeygui, I noticed the Body Tracking SDK shows the TF frames of the joints in k4abt_simple_3d_viewer, but the ROS driver publishes the body joint information as marker arrays. Is there a way to visualize the TF frames in RViz like in k4abt_simple_3d_viewer?

Thanks

ravijo commented 2 years ago

> Is it possible to obtain the pose and perform joint tracking in RViz using the Azure Kinect camera? Is there a ROS package that can do this? I am looking for something like the openpose package.

It is an old discussion, yet still open, so I will quickly provide a reference for future readers. Please check out the following URL: https://github.com/ravijo/ros_openpose

ooeygui commented 2 years ago

Thanks for the ping.

There is a ROS REP for Human-Robot Interaction which includes a definition of how people and skeletons are represented: https://github.com/ros-infrastructure/rep/pull/338.

We intend to converge Azure Kinect body tracking with this REP once it is accepted (I'm reviewing it to see what changes would be needed to align with Kinect body tracking).

I should also state that new features like this will be for ROS 2 only, not ROS 1.