Gengzigang / PCT

This is an official implementation of our CVPR 2023 paper "Human Pose as Compositional Tokens" (https://arxiv.org/pdf/2303.11638.pdf)

h36m dataset and result reproduction #32

Open henryfungusa opened 5 months ago

henryfungusa commented 5 months ago

Hi, thanks for your insightful work. I was able to reproduce the paper's results on the COCO dataset, and I am now attempting to reproduce the results on H36M. However, I cannot find any sample code or instructions for it. Could you please help by sharing the code, or explain how you conducted the H36M experiments?

henryfungusa commented 5 months ago

I printed the model output from multi_gpu_test(). The predicted depth values of all joints within the same box are identical, so the predictions are effectively a 2D pose rather than a 3D pose. I am very curious how your H36M evaluation was conducted.
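
For reference, this is roughly how I inspected the outputs. It is only a minimal sketch assuming an mmpose-style top-down pipeline where each result dict from multi_gpu_test() holds a 'preds' array of shape (num_persons, num_joints, 3) whose last channel is the per-joint depth; the actual key names in this repo may differ:

```python
import numpy as np

# `outputs` is the list returned by multi_gpu_test(); each entry is assumed
# (hypothetical key name) to expose a 'preds' array of shape
# (num_persons, num_joints, 3), with the last channel holding the per-joint depth.
def report_flat_depths(outputs, key='preds'):
    for i, result in enumerate(outputs):
        preds = np.asarray(result[key])   # (num_persons, num_joints, 3)
        depth = preds[..., 2]             # last channel of every joint
        # If all joints of a person share one value, the per-person spread is ~0,
        # i.e. the prediction carries no real depth information.
        spread = depth.max(axis=-1) - depth.min(axis=-1)
        if np.allclose(spread, 0.0):
            print(f"sample {i}: depth channel is constant across joints")
```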

guzejungithub commented 2 months ago

> Hi, thanks for your insightful work. I was able to reproduce the paper's results on the COCO dataset, and I am now attempting to reproduce the results on H36M. However, I cannot find any sample code or instructions for it. Could you please help by sharing the code, or explain how you conducted the H36M experiments?

Hello,

I am unable to reproduce the results on the COCO validation set. I obtained 75.9 AP with the base model and gt_box=false, and 78.2 AP with gt_box=true. Are you getting the same results? The gt_box=false result is 1.6 AP below the reported 77.5.
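
For clarity, this is how I switched between the two box settings. It is a sketch based on the mmpose-style top-down data config that this repo appears to build on; the actual config file and detector-result path in PCT may differ:

```python
# In an mmpose-style COCO top-down data config, evaluation with ground-truth
# boxes vs. detected boxes is typically toggled via `use_gt_bbox`:
data_cfg = dict(
    use_gt_bbox=True,   # True: evaluate with GT person boxes ("gt_box=true")
    # When False, person boxes are read from a detector result file instead:
    bbox_file='data/coco/person_detection_results/'
              'COCO_val2017_detections_AP_H_56_person.json',
    det_bbox_thr=0.0,   # score threshold applied to detected boxes
)
```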