Closed · ChChen666 closed 2 months ago
Sorry for the confusion. Please refer to this issue: https://github.com/QitaoZhao/ContextAware-PoseFormer/issues/4#issuecomment-1885569798. In our paper we used keypoints from CPN (which can differ from the feature backbone) to keep the same setting as other papers. However, we do have pre-processed HRNet keypoints in https://drive.google.com/drive/folders/1OYKWnu_5GPLRfceD3Psf4-JZkloBodKx. You may need to adjust a few lines in https://github.com/QitaoZhao/ContextAware-PoseFormer/blob/main/ContextPose/mvn/datasets/human36m.py. Check that issue for more details!
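The adjustment in `human36m.py` amounts to pointing the dataset loader at the HRNet 2D-detection file instead of the CPN one. A minimal sketch of that idea is below; the file names and the helper function are illustrative assumptions, not the repo's actual code, so check the linked issue for the real lines to change:

```python
# Hypothetical helper illustrating the change: select which pre-processed
# 2D keypoint archive the Human3.6M dataset class should load.
# File names here are assumptions; verify them against the Drive folder.

def keypoint_file_for(detector: str) -> str:
    """Return the pre-processed 2D keypoint file for a given 2D detector."""
    files = {
        "cpn": "data_2d_h36m_cpn_ft_h36m_dbb.npz",  # the repo's default (CPN detections)
        "hrnet": "data_2d_h36m_hrnet.npz",          # assumed name for the HRNet detections
    }
    try:
        return files[detector.lower()]
    except KeyError:
        raise ValueError(f"Unknown 2D detector: {detector!r}")

# The dataset class would then load this file instead of the CPN one:
print(keypoint_file_for("hrnet"))
```

The point is only that the 2D keypoint source is a separate input from the image-feature backbone, so swapping CPN for HRNet is a data-loading change rather than a model change.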
Great work! Following the code you provided, we can only obtain results with HRNet as the image-feature backbone and CPN as the 2D detector, which is inconsistent with the results in the paper. Could you please provide the pre-processed labels for the 2D detection results from HRNet?