akanazawa / hmr

Project page for End-to-end Recovery of Human Shape and Pose

using openpose keypoints #39

Closed Jhfelectric closed 6 years ago

Jhfelectric commented 6 years ago

Hello, I am trying to use the JSON keypoints from OpenPose as input to your demo.py. I have modified src/util/openpose.py to accept 'pose_keypoints_2d' instead of 'pose_keypoints', but it seems my keypoints aren't used at all.
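For reference, the key rename described above can be handled with a small fallback when reading the OpenPose JSON. This is a hedged sketch of the idea; the helper names are mine, not hmr's actual src/util/openpose.py code:

```python
import json

def get_pose_keypoints(person):
    """Return the flat keypoint list for one detected person.

    Newer OpenPose builds write 'pose_keypoints_2d'; older ones wrote
    'pose_keypoints'. The fallback here is an assumption for
    illustration, not hmr's actual logic.
    """
    for key in ("pose_keypoints_2d", "pose_keypoints"):
        if key in person:
            return person[key]
    raise KeyError("no pose keypoints found in person entry")

def load_first_person_keypoints(json_path):
    """Load the first detected person's keypoints from an OpenPose JSON file."""
    with open(json_path) as f:
        data = json.load(f)
    people = data.get("people", [])
    if not people:
        return None  # OpenPose detected nobody in the frame
    return get_pose_keypoints(people[0])
```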

See these images as an example (note: I have cropped original images so they fit in this box):

first image: pose/skel render from OpenPose (BODY_25)
second image: pose/skel render from hmr
third image: pose overlay from hmr

Looking at those images, my keypoints obviously got discarded; the pose of the overlaid mesh is the same as the pose/skel rendered by hmr.

What could I have done wrong? Thanks for this great code, and for any help!

HMR: Ubuntu 18 on Windows Hyper-V, Python 2.7, TensorFlow, no GPU
$ python -m demo --img_path data/me.jpg --json_path data/me_keypoints.json

OpenPose: Windows pre-built demo from git
C:\OpenPose>bin\openposedemo.exe --image_dir I:\images --write_images I:\images\out --write_json I:\images\out

nwsterling commented 6 years ago

Same question here :)

Similarly, I am unable to get a good HMR pose approximation, so I want to try using the OpenPose JSON output. However, since I am using the TensorFlow port of OpenPose, I need to write my own pose_keypoints_2d JSON exporter function.

Could someone please clarify the format of the JSON pose_keypoints_2d array? My understanding is that it is as follows...

... pose_keypoints_2d = [x1, y1, score1, x2, y2, score2, ...] ...

Is that correct? In addition, which keypoints are included in pose_keypoints_2d?
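Assuming the flat [x, y, score, ...] layout above is right, reshaping it into one row per joint is a one-liner; a minimal sketch:

```python
import numpy as np

def to_joints(flat_keypoints):
    """Reshape OpenPose's flat [x1, y1, c1, x2, y2, c2, ...] list
    into an (N, 3) array with one (x, y, confidence) row per joint."""
    return np.asarray(flat_keypoints, dtype=float).reshape(-1, 3)
```

With the BODY_25 model this would yield a (25, 3) array, one row per body part.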

Many thanks!

Jhfelectric commented 6 years ago

This doc should help you, I guess: https://github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md. Extract:

An array pose_keypoints_2d containing the body part locations and detection confidence formatted as x1,y1,c1,x2,y2,c2,.... The coordinates x and y can be normalized to the range [0,1], [-1,1], [0, source size], [0, output size], etc., depending on the flag keypoint_scale (see flag for more information), while c is the confidence score in the range [0,1].
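The normalization matters because hmr works in pixel coordinates. If the JSON was written with a keypoint_scale that normalizes x and y into [0, 1], a conversion back to pixels could look like this (a hypothetical helper, not part of OpenPose or hmr):

```python
import numpy as np

def normalized_to_pixels(joints, img_width, img_height):
    """Map (N, 3) keypoints whose x, y were normalized to [0, 1]
    back to pixel coordinates; the confidence column is untouched.
    Hypothetical helper for illustration only."""
    out = joints.astype(float).copy()
    out[:, 0] *= img_width   # x back to pixels
    out[:, 1] *= img_height  # y back to pixels
    return out
```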

Jhfelectric commented 6 years ago

After reading the docs again for the 36th time, I realized what the following paragraph could mean:

Images should be tightly cropped, where the height of the person is roughly 150px. On images that are not tightly cropped, you can run openpose and supply its output json (run it with --write_json option). When json_path is specified, the demo will compute the right scale and bbox center to run HMR:

If I understand correctly, OpenPose keypoints are used only to get a good bbox on badly cropped images, but the actual skeleton is always created by hmr. Am I right? The underlying question then becomes: is it possible to use external keypoints in hmr to improve the pose estimation?
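If that reading is right, the "right scale and bbox center" computation the README mentions can be sketched roughly as follows. This is only an illustration of the idea; the actual logic lives in hmr's demo code and may differ:

```python
import numpy as np

def bbox_from_keypoints(joints, conf_thresh=0.1, target_height=150.0):
    """Estimate a crop center and scale from 2D keypoints.

    joints is an (N, 3) array of (x, y, confidence). Joints below
    conf_thresh are ignored; the person is rescaled so their keypoint
    extent is roughly target_height px, matching hmr's stated
    expectation of a ~150px-tall person.
    """
    visible = joints[joints[:, 2] > conf_thresh, :2]  # confident joints only
    if len(visible) == 0:
        return None, None  # nothing reliable detected
    min_xy, max_xy = visible.min(axis=0), visible.max(axis=0)
    center = (min_xy + max_xy) / 2.0
    person_height = max_xy[1] - min_xy[1]
    scale = target_height / person_height
    return center, scale
```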

Thanks again.

akanazawa commented 6 years ago

Hi all,

As @Jhfelectric points out, openpose keypoints are only used to get a good bounding box.

Yes! To improve the fit using external keypoints, please look at SMPLify; it is an optimization-based approach that solves for the SMPL parameters that best explain the 2D keypoints. You can use the HMR output as the initialization for SMPLify and solve for a pose and shape that better fit the 2D keypoints.
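To make the SMPLify idea concrete, here is a toy sketch of its core term: a confidence-weighted 2D reprojection loss minimized over pose parameters. It uses a made-up 2-parameter linear "body model" and weak-perspective projection in place of the real SMPL model, so everything except the weighting scheme is an assumption for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in: 3 "joints" whose 3D position is a linear function of a
# 2-parameter "pose" vector theta. Real SMPLify uses the full SMPL model.
BASIS = np.array([[[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]],
                  [[0.5, 0.5], [1.0, 0.0], [0.0, 0.0]],
                  [[0.0, 1.0], [0.5, 0.5], [0.0, 0.0]]])  # (J, 3, P)

def joints_3d(theta):
    return BASIS @ theta            # (J, 3) joint positions

def project(pts3d, scale=1.0):
    return scale * pts3d[:, :2]     # weak-perspective projection (no trans)

def residuals(theta, kp2d):
    """Confidence-weighted reprojection error, the core SMPLify term."""
    pred = project(joints_3d(theta))
    weights = np.sqrt(kp2d[:, 2:3])            # weight by detection confidence
    return (weights * (pred - kp2d[:, :2])).ravel()

# Target 2D keypoints (x, y, confidence); in the real pipeline the
# initial guess theta0 would come from hmr's output instead of zeros.
kp2d = np.array([[1.0, 2.0, 1.0], [1.5, 1.0, 1.0], [2.0, 1.5, 1.0]])
theta0 = np.zeros(2)
theta_fit = least_squares(residuals, theta0, args=(kp2d,)).x
```

The real objective also includes pose and shape priors to keep the solution plausible; this sketch shows only the data term.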

Best,

Angjoo

Rayf0 commented 5 years ago

Hi,

@Jhfelectric, have you found a way to use external keypoints in hmr so far?

Thank you!

sagar0041 commented 1 year ago

Hello everyone, can someone please help me? I have generated multiple .json files (https://github.com/sagar0041/SignLanguage-ProjectWork/tree/master/Openpose-Task1/examples/outputdata/data/keypoints), and from these keypoints I want to generate 3D meshes using SMPLX. Can anyone help me with this?