mks0601 / 3DMPPE_POSENET_RELEASE

Official PyTorch implementation of "Camera Distance-aware Top-down Approach for 3D Multi-person Pose Estimation from a Single RGB Image", ICCV 2019
MIT License

how did you split train and test for MPII #104

Closed nikhilchh closed 2 years ago

nikhilchh commented 2 years ago

You have provided JSON files for the MPII train and test sets. How was the split created?

mks0601 commented 2 years ago

I followed the official split.

nikhilchh commented 2 years ago

Independent questions: 1. How do you handle different datasets? Every dataset has a different set of keypoints in a different order — for example, COCO differs from MPII.

I assume you would fix the network to output N keypoints that are common to all datasets, and then write converters to map every dataset onto the same set and order of keypoints.

I wrote a lot of code that was specific to COCO, and it now looks like a lot of work to adapt it for MPII. I would love to hear your opinion on this.
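The converter idea described above can be sketched as a simple index remapping. The common 12-joint target order below is an assumption for illustration (the repo's actual joint set lives in `data/dataset.py`); the COCO and MPII source orderings are the standard ones published with those datasets.

```python
import numpy as np

# Hypothetical common 12-joint target order (an illustration, not the
# repo's actual joint definition in data/dataset.py).
COMMON_JOINTS = [
    'r_shoulder', 'l_shoulder', 'r_elbow', 'l_elbow',
    'r_wrist', 'l_wrist', 'r_hip', 'l_hip',
    'r_knee', 'l_knee', 'r_ankle', 'l_ankle',
]

# Index of each common joint in COCO's standard 17-keypoint order
# (0 nose, ..., 5 l_shoulder, 6 r_shoulder, ..., 16 r_ankle).
COCO_TO_COMMON = [6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]

# Index of each common joint in MPII's standard 16-keypoint order
# (0 r_ankle, 1 r_knee, 2 r_hip, 3 l_hip, ..., 15 l_wrist).
MPII_TO_COMMON = [12, 13, 11, 14, 10, 15, 2, 3, 1, 4, 0, 5]

def to_common(kpts: np.ndarray, index_map: list) -> np.ndarray:
    """Reorder dataset-specific keypoints of shape (J, D) into the
    common (12, D) layout by fancy indexing."""
    return kpts[index_map]
```

With one such index map per dataset, the rest of the pipeline only ever sees the common layout, e.g. `to_common(coco_kpts, COCO_TO_COMMON)`.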

2. Do most datasets provide 12 2D keypoints for the shoulders, elbows, wrists, hips, knees, and ankles? 3. Is it important to account for subtle differences in how datasets label a given keypoint? For example, a shoulder annotation in COCO might be slightly offset from one in MPII.

mks0601 commented 2 years ago
1. In this repo, I used the keypoint set of the 3D datasets. See https://github.com/mks0601/3DMPPE_POSENET_RELEASE/blob/master/data/dataset.py
2. Yes.
3. It does not seem very important for the Human3.6M and MuPoTS-3D evaluations.