hongsukchoi / Pose2Mesh_RELEASE

Official Pytorch implementation of "Pose2Mesh: Graph Convolutional Network for 3D Human Pose and Mesh Recovery from a 2D Human Pose", ECCV 2020

Confused about the performance of Pose2mesh on Human3.6M #36

Closed: Cakin-Kwong closed this issue 2 years ago

Cakin-Kwong commented 2 years ago

The performance of Pose2Mesh on Human3.6M:

- Trained with Human3.6M only: MPJPE 64.9, PA-MPJPE 48.0
- Trained with Human3.6M + COCO: MPJPE 67.9, PA-MPJPE 49.9
- Best reported result: MPJPE 64.9, PA-MPJPE 46.3

As mentioned in the paper, training Pose2Mesh with more datasets decreases its performance on Human3.6M. Is the best result on Human3.6M supposed to come from training on Human3.6M only? In that case, the best result should match the Human3.6M-only result. Or should it be trained on Human3.6M + COCO + MuCo? Would you please share the training settings for Human3.6M?

hongsukchoi commented 2 years ago

> the best result should match the Human3.6M-only result

I think so, as the train and test sets of Human3.6M share the same action categories and thus similar poses. This is discussed in the paper.

The training settings are in `asset/yaml/`.

Cakin-Kwong commented 2 years ago

> > the best result should match the Human3.6M-only result
>
> I think so, as the train and test sets of Human3.6M share the same action categories and thus similar poses. This is discussed in the paper.

Thanks for your quick reply. Since the best result should be trained on Human3.6M only, shouldn't Table 5 and Table 8 report the same PA-MPJPE? Is this a miswriting?

[screenshot: paper_result, showing the PA-MPJPE numbers in Tables 5 and 8]

hongsukchoi commented 2 years ago

No, the explanation is also in the paper.

When computing the PA-MPJPE in Table 5, I used all camera images (4 cameras in Human3.6M), which I think is natural.

In Table 8, for PA-MPJPE, I used only the frontal camera images, for a fair comparison with previous works. From the paper:

> We measured the PA-MPJPE of Pose2Mesh on Human3.6M by testing only on the frontal camera set, following the previous works [23, 27, 28].
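
For reference, PA-MPJPE first rigid-aligns the prediction to the ground truth via Procrustes analysis (optimal scale, rotation, and translation) and only then averages the per-joint errors, which is why the evaluation protocol (all cameras vs. frontal only) still changes the number. Below is a minimal NumPy sketch of the metric for a single frame; it is an illustration, not the repo's exact evaluation code:

```python
import numpy as np

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE for one frame.

    pred, gt: (J, 3) arrays of predicted / ground-truth 3D joints (mm).
    """
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    X, Y = pred - mu_p, gt - mu_g              # center both joint sets
    M = X.T @ Y                                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(M)
    V = Vt.T
    Z = np.eye(3)
    Z[2, 2] = np.sign(np.linalg.det(V @ U.T))  # guard against reflections
    R = V @ Z @ U.T                            # optimal rotation
    s = np.trace(R @ M) / (X ** 2).sum()       # optimal scale
    aligned = s * X @ R.T + mu_g               # align pred onto gt
    return np.linalg.norm(aligned - gt, axis=1).mean()
```

In practice the metric is computed per frame like this and then averaged over the test set, so restricting the test set to one camera directly changes the reported value.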

Cakin-Kwong commented 2 years ago

> No, the explanation is also in the paper.
>
> When computing the PA-MPJPE in Table 5, I used all camera images (4 cameras in Human3.6M), which I think is natural.
>
> In Table 8, for PA-MPJPE, I used only the frontal camera images, for a fair comparison with previous works.
>
> > We measured the PA-MPJPE of Pose2Mesh on Human3.6M by testing only on the frontal camera set, following the previous works [23, 27, 28].

That resolves my confusion. So in Table 8, the training set of Human3.6M is still [1, 5, 6, 7, 8] and the test set is [9, 11], but only the frontal camera set (camera 3) is used? Would you mind sharing the code to get this test set? Or how should I modify the code in Pose2Mesh?

hongsukchoi commented 2 years ago

Yes.

You just need to skip every camera annotation other than '4' when loading the test set, like this:

```python
if cam != '4':  # front camera (Table 6)
    continue
```
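
For context, here is a minimal, self-contained sketch of where such a filter could sit when building the Human3.6M test list. The annotation layout (`cam_idx` key, string camera ids) is an assumption for illustration, not the repo's exact format:

```python
# Hypothetical annotation entries; keys and values are illustrative only.
annotations = [
    {'subject': 9, 'action': 'Walking', 'cam_idx': '1', 'frame': 0},
    {'subject': 9, 'action': 'Walking', 'cam_idx': '4', 'frame': 0},
]

test_list = []
for ann in annotations:
    cam = ann['cam_idx']
    if cam != '4':  # keep only the frontal camera, per the Table 8 protocol
        continue
    test_list.append(ann)

print(len(test_list))  # -> 1: only the cam '4' sample survives
```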
Cakin-Kwong commented 2 years ago
> - So in Table 8, the training set of Human3.6M is still [1, 5, 6, 7, 8] and the test set is [9, 11], but only the frontal camera set (camera 3) is used?
>
> Yes.
>
> - Would you mind sharing the code to get this test set? Or how should I modify the code in Pose2Mesh?
>
> You just need to skip every camera annotation other than '4' when loading the test set, like this:
>
> ```python
> if cam != '4':  # front camera (Table 6)
>     continue
> ```

Thanks, I got the paper result by following your suggestion.