hongsukchoi / Pose2Mesh_RELEASE

Official Pytorch implementation of "Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose", ECCV 2020
MIT License

Wrong overall mesh volume for different person with certain bias #51

Open friendly-code-bot opened 2 years ago

friendly-code-bot commented 2 years ago

Hi,

I am trying to use your amazing work to estimate a person's volume from the fitted SMPL mesh. I was able to transform the mesh into my camera coordinate system. For each body region (head, hands, arms, legs, etc.) I calculate its volume and sum them up, using the world coordinates of the transformed mesh. A render onto the image matches the person's silhouette.
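Roughly, what I do looks like this (a simplified sketch of my pipeline; the names `smpl_vertices`, `smpl_faces`, `R`, `t` and the use of `trimesh` are my own, not from this repository):

```python
# Minimal sketch of my volume computation, assuming the predicted SMPL
# vertices are in meters and R (3x3 rotation) / t (3,) translation are my
# camera extrinsics. Not code from Pose2Mesh itself.
import numpy as np
import trimesh

def mesh_volume_in_camera_coords(smpl_vertices, smpl_faces, R, t):
    # Rigidly transform the mesh into the camera coordinate system.
    verts_cam = smpl_vertices @ R.T + t                    # (6890, 3), meters
    mesh = trimesh.Trimesh(vertices=verts_cam, faces=smpl_faces, process=False)
    # A rigid transform does not change the enclosed volume, so this equals
    # the volume in the original SMPL frame (assumes a watertight mesh).
    return mesh.volume                                     # cubic meters
```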

But I found that different persons, with clearly different overall mesh shapes, all end up with a similar volume, within about +-0.05 m^3 of each other.

It is crucial for my work to have a reasonably accurate estimate of a person's volume, but it seems that using the SMPL model won't be a good way to do this.

Could you give me some ideas? Is my conversion from the SMPL mesh to the camera coordinate system correct? I suspect it could be a scaling issue. I hope you can help me out.

BR

hongsukchoi commented 2 years ago

The GT mesh is on a meter scale during training, so the predicted meshes are also on a meter scale.

The predicted meshes having similar volumes is inevitable, since there is no way to know a person's actual absolute height (and thus scale) from 2D evidence alone.
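To illustrate why this matters so much for volume: volume grows with the cube of the unknown scale factor, so even a modest error in absolute height changes the volume substantially, while meshes predicted at a fixed canonical scale all land near the same volume. A quick check (using `trimesh` on a stand-in shape, not code from this repository):

```python
# Cubic relationship between scale and volume: a person who is a factor s
# taller than the canonically scaled output mesh has s**3 times the volume.
import trimesh

mesh = trimesh.creation.icosphere(radius=0.5)   # stand-in for a body mesh
for s in (0.9, 1.0, 1.1):                       # +-10% height error
    scaled = mesh.copy()
    scaled.apply_scale(s)
    print(s, scaled.volume / mesh.volume)       # ~0.729, 1.0, ~1.331
```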