hongsukchoi / Pose2Mesh_RELEASE

Official Pytorch implementation of "Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose", ECCV 2020
MIT License

Ground truth mesh image correspondence #18

Closed: Shubhendu-Jena closed this issue 3 years ago

Shubhendu-Jena commented 3 years ago

Hi,

Thank you for the great work. I have a small question about the data. For 3DPW and COCO, whose images often contain multiple people (even after cropping with the bounding-box coordinates provided in your annotations), how did you decide which person to fit the mesh to? I'm attaching a cropped example from the 3DPW dataset for reference. I'd be grateful if you could let me know, as I want to establish the correspondence between the person and the ground-truth mesh for every image in the COCO/3DPW datasets.

Thanks in advance :)

[attached: cropped example image from 3DPW]

hongsukchoi commented 3 years ago

Hi Shubhendu,

The GT meshes of 3DPW are provided in the original annotations, along with the 2D joint coordinates, bounding boxes, and so on. We obtained pseudo-GT meshes for COCO by fitting SMPL parameters to the 2D joint coordinates in its annotations.
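For anyone who wants to verify the correspondence themselves, below is a minimal sketch of one possible check: reconstruct the mesh from a given SMPL pose/shape entry, project it into the image, and match its 2D bounding box against the per-person boxes in the annotation. The model path, intrinsics `K`, root translation, and `person_boxes` values are placeholders, not the repository's actual annotation fields, and the mesh is assumed to already be in the camera frame (apply the dataset's extrinsics first if it is not).

```python
import numpy as np
import torch
import smplx


def project_points(points_3d, K):
    """Perspective projection of Nx3 camera-space points with a 3x3 intrinsic matrix K."""
    uv = points_3d @ K.T
    return uv[:, :2] / uv[:, 2:3]


def bbox_iou(a, b):
    """IoU of two boxes given as (x_min, y_min, x_max, y_max)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-8)


# Placeholder model path; point this at your local SMPL model files.
smpl = smplx.create('models', model_type='smpl', gender='neutral')

# pose (72,) and shape (10,) taken from one annotation entry; dummy zeros here.
pose = torch.zeros(1, 72)                     # replace with the annotated pose parameters
shape = torch.zeros(1, 10)                    # replace with the annotated shape parameters
transl = torch.tensor([[0.0, 0.0, 3.0]])      # replace with the annotated root translation
out = smpl(betas=shape, global_orient=pose[:, :3], body_pose=pose[:, 3:], transl=transl)
verts = out.vertices[0].detach().numpy()      # (6890, 3) camera-space vertices

K = np.array([[1500.0, 0.0, 960.0],           # hypothetical intrinsics; use the dataset's own
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
uv = project_points(verts, K)
mesh_box = (uv[:, 0].min(), uv[:, 1].min(), uv[:, 0].max(), uv[:, 1].max())

# Per-person (x_min, y_min, x_max, y_max) boxes from the annotation; dummy values here.
person_boxes = [(100, 80, 400, 900), (500, 60, 820, 880)]
match = int(np.argmax([bbox_iou(mesh_box, b) for b in person_boxes]))
print('The SMPL parameters most likely correspond to person index', match)
```

The same idea works with per-joint 2D distances instead of box IoU, but that requires mapping SMPL joints to the dataset's keypoint ordering, so the box check is the simpler first pass.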

Shubhendu-Jena commented 3 years ago

Hi. Thank you for the quick response and the insight. However, my question is that for a particular image frame I can see only one SMPL shape vector (of size 10) and one pose parameter vector (of size 72), while there is more than one person in the frame. This means the mesh has been fit to only one of the people in the image. Is there a way to find out which one that is?

[attached: screenshot of the annotation entry]

Shubhendu-Jena commented 3 years ago

Apologies. Issue solved. Closing now.