Closed lxj22 closed 6 months ago
Indeed, they are likely not in the same space.
We use this function to transform human pose annotation to our coordinate system:
```python
def coord_transform_human_pose_tool_to_world(arr):
    arr *= 25                # uniform scale by 25
    arr[:, 2] += 1000        # offset the third coordinate by 1000
    arr[:, 1] *= -1          # negate the second coordinate
    arr = arr[:, [0, 2, 1]]  # reorder axes from (x, y, z) to (x, z, y)
    return arr
```
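For reference, a minimal usage sketch of the function above (the input values are made up; note that the first three operations mutate the input array in place, so pass a copy if you need to keep the original):

```python
import numpy as np

# function from the comment above
def coord_transform_human_pose_tool_to_world(arr):
    arr *= 25                # uniform scale by 25
    arr[:, 2] += 1000        # offset the third coordinate by 1000
    arr[:, 1] *= -1          # negate the second coordinate
    arr = arr[:, [0, 2, 1]]  # reorder axes from (x, y, z) to (x, z, y)
    return arr

pose = np.array([[1.0, 2.0, 3.0]])  # dummy (x, y, z) joint, not real data
world = coord_transform_human_pose_tool_to_world(pose.copy())
print(world)  # [[  25. 1075.  -50.]]
```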
You might also need to multiply the center label from the object pose by 1000; I am not quite sure about this. But it is definitely possible to overlay these on top of each other, as we show in our visualization here: https://static-content.springer.com/esm/chp%3A10.1007%2F978-3-031-16449-1_45/MediaObjects/539250_1_En_45_MOESM1_ESM.mp4
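If the multiplication by 1000 is needed, it would amount to a meters-to-millimeters conversion so the object centers land in the same units as the transformed poses. A sketch under that assumption (the `center_label` values here are made up, and whether this scaling is actually required is uncertain, as noted above):

```python
import numpy as np

# hypothetical object center in meters (assumption, not real data)
center_label = np.array([[1.2, 0.5, 2.0]])

# convert meters to millimeters so it matches the pose coordinates
center_mm = center_label * 1000
print(center_mm)  # [[1200.  500. 2000.]]
```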
This is very helpful, thank you!
Hello, I am wondering how I can use a unified scale to locate the positions of both humans and objects such as "anesthesia_equipment". I use the annotations in the export_holistic_take_processed folder as the human position points and the GroupFree3D dataset's "center_label" as the object's position, but they seem to be on different scales or processed with different standard methods. How can I get consistently scaled positions for both humans and objects? Thank you.