egeozsoy / 4D-OR

Official code of the paper 4D-OR: Semantic Scene Graphs for OR Domain Modeling accepted at MICCAI 2022. This repo includes both the dataset and our code.
MIT License

Inquiry about xyz position scale among different annotations #6

Closed lxj22 closed 6 months ago

lxj22 commented 6 months ago

Hello, I am wondering how I can use a unified scale to locate the positions of both humans and objects such as "anesthesia_equipment". I use the annotations in the export_holistic_take_processed folder for human position points and GroupFree3D's "center_label" for object positions, but they seem to be in different scales or processed with different standards. How can I get the positions of both humans and objects in a single unified scale? Thank you.

egeozsoy commented 6 months ago

Indeed, they are likely not in the same space.

We use this function to transform human pose annotation to our coordinate system:

def coord_transform_human_pose_tool_to_world(arr):
    # Scale the human pose annotations into world units
    arr *= 25
    # Shift along the z axis
    arr[:, 2] += 1000
    # Flip the y axis
    arr[:, 1] *= -1
    # Swap the y and z axes
    arr = arr[:, [0, 2, 1]]
    return arr

You might also need to multiply the center label from the object pose by 1000, though I am not quite sure about this. But it is definitely possible to overlay these on top of each other, as we show in our visualization here: https://static-content.springer.com/esm/chp%3A10.1007%2F978-3-031-16449-1_45/MediaObjects/539250_1_En_45_MOESM1_ESM.mp4
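Putting both suggestions together, a minimal sketch of bringing human joints and an object center into one space might look like the following. The input arrays here are placeholders, and the factor of 1000 for the object center is the assumption mentioned above (a meters-to-millimeters scaling), not a confirmed part of the dataset pipeline:

```python
import numpy as np

def coord_transform_human_pose_tool_to_world(arr):
    # Transform from the repo: scale, shift z, flip y, then swap the y/z axes.
    arr = arr.astype(np.float64).copy()  # copy to avoid mutating the caller's array
    arr *= 25
    arr[:, 2] += 1000
    arr[:, 1] *= -1
    arr = arr[:, [0, 2, 1]]
    return arr

# Placeholder inputs, not real annotations:
human_joints = np.array([[1.0, 2.0, 3.0]])    # pose-tool coordinates of one joint
object_center = np.array([[0.5, 0.2, 0.1]])   # a GroupFree3D center_label

human_world = coord_transform_human_pose_tool_to_world(human_joints)
object_world = object_center * 1000           # assumed scaling, verify against the data

print(human_world)   # [[  25. 1075.  -50.]]
print(object_world)  # [[500. 200. 100.]]
```

A quick sanity check after applying both transforms is to overlay the resulting points (e.g. in Open3D) and verify that humans and equipment land in plausible relative positions, as in the linked visualization.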

lxj22 commented 6 months ago

This is very helpful, thank you!