hongsukchoi / 3DCrowdNet_RELEASE

Official Pytorch implementation of "Learning to Estimate Robust 3D Human Mesh from In-the-Wild Crowded Scenes", CVPR 2022
MIT License

How to learn trans? #33

Open lllll8 opened 9 months ago

lllll8 commented 9 months ago

The ground-truth SMPL trans parameters vary widely in range across datasets. How can the network learn the correct trans and, combined with the focal length and the princpt, produce the correct projection? I noticed you set focal = 5000 and princpt = 256/2; how should I understand these values? Thank you!!

hongsukchoi commented 9 months ago

Hi,

The ground-truth SMPL 'trans' parameters are not used for training, since we crop the image around the target person.

The focal length is just a default number, which can be changed. The princpt is the center of the cropped and resized image.

focal = (5000, 5000)  # virtual focal lengths
princpt = (input_img_shape[1] / 2, input_img_shape[0] / 2)  # virtual principal point position
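For reference, a minimal sketch of how such virtual intrinsics can be used to perspective-project the regressed mesh onto the cropped image. The mesh, the camera translation, and the helper function below are placeholders for illustration, not code from this repo; the point is that only a translation predicted for the crop is needed, not the dataset's SMPL trans.

import numpy as np

# Assumed values mirroring the config above (not taken from the repo)
input_img_shape = (256, 256)  # (height, width) of the cropped, resized input
focal = (5000, 5000)          # virtual focal lengths (fx, fy)
princpt = (input_img_shape[1] / 2, input_img_shape[0] / 2)  # virtual principal point (cx, cy)

def project_points(points_cam, focal, princpt):
    # Perspective-project 3D points (in camera coordinates) to 2D pixel coordinates
    x = points_cam[:, 0] / points_cam[:, 2] * focal[0] + princpt[0]
    y = points_cam[:, 1] / points_cam[:, 2] * focal[1] + princpt[1]
    return np.stack([x, y], axis=1)

# Placeholder mesh (6890 SMPL vertices) and a placeholder camera translation
# that puts the mesh in front of the virtual camera
mesh_cam = np.random.randn(6890, 3) * 0.2
cam_trans = np.array([0.0, 0.0, 10.0])
mesh_img = project_points(mesh_cam + cam_trans, focal, princpt)  # (6890, 2) pixel coordinates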
lllll8 commented 9 months ago

Thank you for your reply. I now understand what you mean about the virtual focal length and principal point. This project is amazing.