otaheri / GRAB

GRAB: A Dataset of Whole-Body Human Grasping of Objects
https://grab.is.tue.mpg.de

About body data and object data #3

Closed LaLaLailalai closed 3 years ago

LaLaLailalai commented 3 years ago
  1. Regarding the body data, what is the difference between "body_pose", "fullpose", and "joints" in body_data (obtained with the code in the attached figure)? What do the sizes of these arrays mean, e.g. 63 for 'body_pose', 127 for 'joints', and 165 for 'fullpose'? How can I get a sequence of joint data, both angles and coordinates? Which key in the data dict is the right one?

  2. As for the object data, I want to get a sequence of object vertices. Is it correct to first get the sampled vertices in their original coordinates (object_data['verts'], with size (1024, 3)) for the first frame, and then rotate and translate them using object_data['global_orient'] and object_data['transl']? How do I calculate this? Is there an existing Python function I can call directly with these parameters as input?

otaheri commented 3 years ago

Hi Mengyi, The guide on the structure of the data in each sequence is on the website under the downloads page.

  1. The SMPL-X body model takes as input a set of parameters, including joint rotations, global translation, and facial expressions, and outputs the body mesh. The rotation parameters are grouped by body part: 'transl', 'global_orient', 'body_pose', 'jaw_pose', 'leye_pose', 'reye_pose', 'left_hand_pose', 'right_hand_pose'. The 'body_pose' rotations cover all joints except the hands, eyes, and jaw. The fullpose data is all the joint rotations concatenated together. The rotations are represented as axis-angle, which uses 3 components per joint; that is why body_pose has 63 components (21 joints × 3) and fullpose has 165 components (55 joints × 3). We don't save 'joints' in the sequences, but you can compute the body joint locations by passing the parameters to the model, as is done here.
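To make the bookkeeping concrete, here is a minimal NumPy sketch of how the axis-angle pose groups add up to 63 and 165 components. The per-group joint counts follow the standard SMPL-X layout; the variable names are illustrative, not taken from the GRAB code:

```python
import numpy as np

# Axis-angle rotations: 3 components per joint.
# Per-group joint counts in the standard SMPL-X layout.
pose_groups = {
    "global_orient":   1,   # pelvis / root orientation
    "body_pose":       21,  # body joints excluding hands, eyes, jaw
    "jaw_pose":        1,
    "leye_pose":       1,
    "reye_pose":       1,
    "left_hand_pose":  15,
    "right_hand_pose": 15,
}

# One frame of (zero) rotations per group, each of shape (n_joints * 3,).
frame = {name: np.zeros(n * 3) for name, n in pose_groups.items()}

print(frame["body_pose"].shape)  # (63,)  -> 21 joints x 3

# Concatenating all groups gives the fullpose vector.
fullpose = np.concatenate([frame[name] for name in pose_groups])
print(fullpose.shape)            # (165,) -> 55 joints x 3
```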

  2. Yes, you can load each object mesh (or get the sampled vertices for each object from object_info['verts_sampled'], as in https://github.com/otaheri/GRAB/blob/184503c222f08ce47d2bebbea3a77dcd2b981ca3/grab/grab_preprocessing.py#L199, not from object_data['verts']) and rotate and translate them using the data in each frame. We have Python code for this in our repo; here we use the object model to do it.
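As a concrete illustration of the second point, the per-frame transform is just a rigid rotation plus translation. Below is a minimal NumPy-only sketch (using the Rodrigues formula to turn the axis-angle 'global_orient' into a rotation matrix); the array names mirror the question, but this is a sketch under those assumptions, not the repo's actual object-model code:

```python
import numpy as np

def axis_angle_to_matrix(rotvec):
    """Rodrigues formula: axis-angle (3,) -> rotation matrix (3, 3)."""
    angle = np.linalg.norm(rotvec)
    if angle < 1e-12:
        return np.eye(3)
    axis = rotvec / angle
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def transform_verts(verts, global_orient, transl):
    """Apply one frame's rotation and translation to (N, 3) vertices."""
    R = axis_angle_to_matrix(global_orient)
    return verts @ R.T + transl

# Example: sampled template vertices and one frame's parameters.
verts = np.random.default_rng(0).normal(size=(1024, 3))
rotvec = np.array([0.0, 0.0, np.pi / 2])  # 90 degrees about z
transl = np.array([0.1, 0.0, 0.5])
verts_posed = transform_verts(verts, rotvec, transl)
print(verts_posed.shape)  # (1024, 3)
```

Applying this per frame, with each frame's 'global_orient' and 'transl', yields the vertex sequence.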

Please do not confuse the body_data and object_data files here with the seq_data here. seq_data is the data that we load from GRAB and use to get body meshes, object meshes, and other information. We then save the extracted data to the object_data and body_data dictionaries.

LaLaLailalai commented 3 years ago

Hi Omid,

There is still one question about the body data. Do you mean that "joints_sbj" contains the joint locations here? What is the meaning of its size (127, 3)? (I already saved it using the same code as https://github.com/otaheri/GRAB/blob/184503c222f08ce47d2bebbea3a77dcd2b981ca3/grab/save_grab_vertices.py#L95, and loaded it with dataloader.py.)

tanmayshankar commented 2 years ago

I believe the 127×3 array holds the first 127 joints from https://github.com/vchoutas/smplx/blob/master/smplx/joint_names.py, each of which is a 3D position in Cartesian space.
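If that is right, pulling one joint's trajectory out of a saved sequence is a simple slice. A small NumPy sketch, where the (T, 127, 3) shape follows the discussion above and the name-to-index map is a hypothetical stand-in for the smplx JOINT_NAMES list (in practice, build it from that list):

```python
import numpy as np

# Hypothetical stand-in for the first entries of smplx JOINT_NAMES;
# build the real map as {name: i for i, name in enumerate(JOINT_NAMES[:127])}.
joint_index = {"pelvis": 0, "left_hip": 1, "right_hip": 2}

# A placeholder sequence of per-frame joint locations: (T, 127, 3).
T = 100
joints_seq = np.zeros((T, 127, 3))

# Trajectory of a single joint across all frames: (T, 3).
pelvis_traj = joints_seq[:, joint_index["pelvis"], :]
print(pelvis_traj.shape)  # (100, 3)
```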