microsoft / human-pose-estimation.pytorch

This project is an official implementation of our ECCV 2018 paper "Simple Baselines for Human Pose Estimation and Tracking" (https://arxiv.org/abs/1804.06208).
MIT License

Different AP using the same model on COCO dataset #55

Closed frankchen121212 closed 5 years ago

frankchen121212 commented 5 years ago
In your paper, you achieved 70.4 AP using 256x192_pose_resnet_50. Results on COCO val2017 with a detector having human AP of 56.4 on COCO val2017:

| Arch | AP | AP .5 | AP .75 | AP (M) | AP (L) | AR | AR .5 | AR .75 | AR (M) | AR (L) |
|---|---|---|---|---|---|---|---|---|---|---|
| 256x192_pose_resnet_50_d256d256d256 | 0.704 | 0.886 | 0.783 | 0.671 | 0.772 | 0.763 | 0.929 | 0.834 | 0.721 | 0.824 |

Meanwhile, using the same method and the same model, after training with our own GPUs, the AP we achieved is:

| Arch | AP | AP .5 | AP .75 | AP (M) | AP (L) | AR | AR .5 | AR .75 | AR (M) | AR (L) |
|---|---|---|---|---|---|---|---|---|---|---|
| 256x192_pose_resnet_50_d256d256d256 | 0.723 | 0.925 | 0.794 | 0.697 | 0.765 | 0.755 | 0.932 | 0.820 | 0.723 | 0.802 |

That is nearly 2 percent higher. Would you be so kind as to explain why?
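For reference, a minimal sketch of how these COCO keypoint AP/AR numbers are typically computed with pycocotools (the annotation and results paths below are illustrative, not the repository's actual output paths):

```python
# Minimal sketch of COCO keypoint evaluation with pycocotools.
# Paths are illustrative; valid.py writes its own results JSON during validation.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth keypoint annotations for val2017.
coco_gt = COCO('data/coco/annotations/person_keypoints_val2017.json')

# Detections in the standard COCO keypoint results format
# (a list of {"image_id", "category_id", "keypoints", "score"}).
coco_dt = coco_gt.loadRes('output/keypoints_val2017_results.json')

coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP, AP .5, AP .75, AP (M), AP (L), AR, ...
```

The AP/AR columns in the tables above correspond to the statistics printed by `summarize()`.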

Also, the validate command is not right. It should be:

```
python pose_estimation/valid.py \
    --cfg experiments/coco/resnet50/256x192_d256x3_adam_lr1e-3.yaml \
    --flip-test \
    --model-file models/pytorch/pose_coco/pose_resnet_50_256x192.pth.tar
```

rather than:

```
python pose_estimation/valid.py \
    --cfg experiments/mpii/resnet50/256x256_d256x3_adam_lr1e-3.yaml \
    --flip-test \
    --model-file models/pytorch/pose_coco/pose_resnet_50_256x256.pth.tar
```

The work is really remarkable; looking forward to your comment.

lxtGH commented 5 years ago

See this issue: https://github.com/microsoft/human-pose-estimation.pytorch/issues/58

frankchen121212 commented 5 years ago

Thank you for your kind advice ~