ShirleyMaxx / ContextPose-PyTorch-release

[CVPR 2021] Official PyTorch implementation of "ContextPose: Context Modeling in 3D Human Pose Estimation: A Unified Perspective"

Reproducing reported performance #2

Closed · JingweiJ closed this 5 months ago

JingweiJ commented 2 years ago

Hi @ShirleyMaxx ,

Thanks a lot for the great work and releasing this codebase!

I'm trying to reproduce the reported performance of 43.4 mm on Human3.6M. After following the instructions for preparing the data and pretrained models, I'm training on a single GPU with the config file human36m_vol_softmax_single.yaml. Instead of running for 9999 epochs, I trained for 30 epochs as indicated in the paper. However, the best result I got is MPJPE = 55.0 mm (the per_pose_error.Average.Average metric in metrics.json).
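
For reference, this is how I'm reading that number out of the dump; the file path and the nested-dict layout are my assumptions from the dotted key name:

```python
import json

# Path is hypothetical -- adjust to wherever the experiment logdir is.
with open("logs/human36m_vol_softmax_single/metrics.json") as f:
    metrics = json.load(f)

# Assuming the dotted key "per_pose_error.Average.Average" maps to nested dicts.
mpjpe = metrics["per_pose_error"]["Average"]["Average"]
print(f"MPJPE: {mpjpe:.1f} mm")  # prints 55.0 in my run
```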

Would you mind clarifying if I'm doing something wrong? How should I modify the config file to reproduce the best performance?

ShirleyMaxx commented 2 years ago

Hi, thanks for your attention and sorry for my late reply!

Since this work focuses on relative pose estimation, we first align the root joint before computing MPJPE, following standard practice. So please check the per_pose_error_relative.Average.Average metric in metrics.json; that is the metric we report.
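
To be concrete, root alignment before the error computation amounts to something like the sketch below (NumPy; the root/pelvis index is an assumption, so check the joint ordering used by the dataloader):

```python
import numpy as np

def mpjpe(pred, gt, root_idx=None):
    """Mean per-joint position error in mm for (N, J, 3) joint arrays.

    With root_idx set, both poses are translated so the root joint sits at
    the origin before the error is taken -- this is the root-aligned
    (relative) MPJPE that per_pose_error_relative reports.
    """
    if root_idx is not None:
        pred = pred - pred[:, root_idx:root_idx + 1]
        gt = gt - gt[:, root_idx:root_idx + 1]
    return np.linalg.norm(pred - gt, axis=-1).mean()

# Usage: mpjpe(pred, gt) gives the absolute error,
#        mpjpe(pred, gt, root_idx=6) the root-aligned one (index 6 assumed).
```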

By the way, if you train with only 1 GPU, you may need more epochs to reach a comparable result. Could you try running 60 epochs (our code lets you resume training from the 30-epoch checkpoint) and check the root-aligned metric then? We've trained several times with 4 GPUs, and 43.4 mm is not even the best result we obtained, so I think sufficient training should get you there.
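
If it helps, resuming in plain PyTorch is roughly the sketch below; the checkpoint layout and key names are assumptions, so inspect the actual file our code writes:

```python
import torch

def resume(model, optimizer, ckpt_path):
    """Restore weights and optimizer state, returning the next epoch index.

    The keys "model_state", "optimizer_state" and "epoch" are assumed --
    print(checkpoint.keys()) to see the layout the codebase actually saves.
    """
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    model.load_state_dict(checkpoint["model_state"])
    optimizer.load_state_dict(checkpoint["optimizer_state"])
    return checkpoint.get("epoch", 0) + 1
```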