wmcnally / kapao

KAPAO is an efficient single-stage human pose estimation model that detects keypoints and poses as objects and fuses the detections to predict human poses.
GNU General Public License v3.0

Kapao output is flip of Coco data #56

Closed nikhilchh closed 2 years ago

nikhilchh commented 2 years ago

Is KAPAO's output a flipped version of the COCO data?

Keypoints that are labelled right in COCO seem to be left in the KAPAO output. Is this the case? If so, why?

wmcnally commented 2 years ago

That should not be the case. Can you provide an example where the keypoints are flipped?

nikhilchh commented 2 years ago

in the demos/image.py file:

model(img, augment=True, kp_flip=data['kp_flip'], scales=data['scales'], flips=data['flips'])[0]

I think this might be causing it. Isn't augment supposed to be False for inference?
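For context, horizontal-flip test-time augmentation does not by itself swap left and right in the final output, because the flipped predictions are mapped back before merging. A minimal sketch of that mapping, assuming `kp_flip` is a COCO-style index map that pairs left/right keypoints (the exact merging done inside the model may differ):

```python
import numpy as np

# Assumed COCO-style flip map: nose stays put, then each left keypoint
# index maps to its right counterpart and vice versa.
KP_FLIP = [0, 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15]

def unflip_keypoints(kps, img_width, kp_flip=KP_FLIP):
    """Map keypoints predicted on a horizontally flipped image back to
    the original frame: mirror the x coordinates, then swap the
    left/right keypoint indices."""
    kps = np.asarray(kps, dtype=float).copy()
    kps[:, 0] = img_width - 1 - kps[:, 0]  # mirror x
    return kps[kp_flip]                    # swap left <-> right rows

# Toy check: applying the un-flip twice recovers the original keypoints,
# since both the mirror and the index swap are their own inverses.
kps = np.stack([np.arange(17, dtype=float), np.zeros(17)], axis=1)
restored = unflip_keypoints(unflip_keypoints(kps, 640), 640)
assert np.allclose(restored, kps)
```

If the un-flip step (or the `kp_flip` map passed to it) were missing, a flipped forward pass would indeed produce left/right-swapped keypoints, which is the symptom described above.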

nikhilchh commented 2 years ago

Or maybe my understanding of left and right is wrong.

Does "left" mean left from the perspective of the viewer of the image, or left from the perspective of the person being labelled?

wmcnally commented 2 years ago

It is from the perspective of the person being labelled.

nikhilchh commented 2 years ago

thanks :)