Closed: yuchen-ji closed this issue 1 year ago
Hello,
Thank you so much!
Hello,
- Yes, the dataset provides skeleton and RGB information divided by "actions". Each action is therefore a label that you may use for action recognition, with methods based on skeleton data, RGB frames, or a combination of both, as you like.
- As far as I know, SMPL is based on meshes, which were not acquired in this case. You may try to reconstruct the mesh using Blender or other 3D software.
Hello, I have some further questions, as follows:
Hi,
The keypoints are a modified version of the COCO 2D keypoints format. See the precise definition here: https://github.com/federicocunico/human-robot-collaboration/blob/master/datasets/chico_dataset.py#L66
In particular, w.r.t. the COCO definition, the ears and eyes are not present, and a "hip" joint (the center of the hips) has been added as the interpolation of the left hip and right hip. Note that the keypoint order differs from COCO, as you can see in the link above.
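The "hip" joint described above can be reproduced with a simple midpoint interpolation. A minimal sketch (the function name and the example coordinates are illustrative, not from the dataset code; the authoritative joint order is in the linked `chico_dataset.py`):

```python
import numpy as np

def hip_center(left_hip, right_hip):
    """Interpolate a 'hip' joint as the midpoint of the left and right
    hip keypoints, as described for the CHICO keypoint layout."""
    left_hip = np.asarray(left_hip, dtype=float)
    right_hip = np.asarray(right_hip, dtype=float)
    return (left_hip + right_hip) / 2.0

# Example with 2D pixel coordinates (values are illustrative only)
hip = hip_center([320.0, 400.0], [360.0, 404.0])
print(hip)  # midpoint of the two hip keypoints: [340. 402.]
```

The same midpoint works for 3D skeleton coordinates, since the interpolation is per-axis.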
You may definitely try; I see no reason to discourage you from trying it. Remember, w.r.t. the CHICO dataset, that each single acquisition can be seen as a sort of action in a loop. They are not trimmed, though.
Hello, I have only just started dabbling in this field, and I want to ask you some questions.