mkocabas / EpipolarPose

Self-Supervised Learning of 3D Human Pose using Multi-view Geometry (CVPR2019)

Test my own picture #9

Closed zhanghczzz closed 4 years ago

zhanghczzz commented 5 years ago

Thanks for your work! I have tried your demo.ipynb file. It works well when I use pictures from H36M; speed and accuracy are both excellent. However, I get wrong results when I use my own pictures. I think the pictures need some modification before testing. Can you tell me the requirements for the input pictures? My wrong result is shown below.

mkocabas commented 5 years ago

Dear @zhanghczzz,

Thanks for your interest in our paper!

Currently, our project only supports inference on Human3.6M images, so it is possible to get bad results on in-the-wild images. However, if you have a dataset with multi-view images, you can use our training strategy, which leverages epipolar geometry to generate pseudo ground truth.

Otherwise, you can take a look at other projects like this and that, which generate plausible results on in-the-wild images.
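For reference, here is a minimal sketch of the general idea behind generating pseudo 3D ground truth from multi-view 2D detections via triangulation. It is not the repository's actual training code; the projection matrices and per-view keypoints are assumed inputs from your own multi-view dataset.

```python
# Minimal sketch (not the repository's training code): build pseudo 3D ground
# truth by DLT-triangulating the same joint detected in several calibrated views.
# Assumed inputs: `proj_mats` are 3x4 camera projection matrices, `points_2d`
# are the (u, v) pixel detections of one joint in each view.
import numpy as np

def triangulate_joint(proj_mats, points_2d):
    """Triangulate one joint from N >= 2 calibrated views."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        # Each view adds two linear constraints on the homogeneous 3D point X:
        # u * (P[2] @ X) = P[0] @ X   and   v * (P[2] @ X) = P[1] @ X
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Triangulating every joint yields a pseudo 3D ground-truth pose:
# pseudo_gt = np.stack([
#     triangulate_joint(proj_mats, [kp2d[view][j] for view in range(n_views)])
#     for j in range(n_joints)
# ])
```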

zhanghczzz commented 5 years ago

Dear @mkocabas

Thanks for your reply and the recommended projects. I wonder if you have tested on in-the-wild images, and what were the results?

Besides, I see that you used the 3DHP dataset to demonstrate the further applicability of your method, and those results look good. But I got wrong results in my own test. Can you tell me what causes the failure?

Looking forward to your reply.

initialneil commented 5 years ago

Thanks for sharing the code! Unfortunately, I'm seeing the same strong overfitting problem. I tested some images from the 3DHP dataset, and the results were pretty bad. Is there anything we can change in the code to make it better?

mkocabas commented 5 years ago

Dear @zhanghczzz,

> Thanks for your reply and the recommended projects. I wonder if you have tested on in-the-wild images, and what were the results?

The projects I linked give decent results on in-the-wild images, but they are not perfect.

> Besides, I see that you used the 3DHP dataset to demonstrate the further applicability of your method, and those results look good. But I got wrong results in my own test. Can you tell me what causes the failure?

The provided pretrained models are trained only on either Human3.6M or MPII, so it is normal to get bad results on 3DHP. Currently, we don't support the 3DHP dataset, but you can easily write your own dataloader to train with it.
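As a starting point, here is a minimal sketch of what such a custom dataloader could look like, assuming the 3DHP frames have been extracted to image files and the annotations (image path, 2D joints, 3D joints) dumped into a pickle file. The file layout and dictionary keys are hypothetical, not part of this repository.

```python
# Minimal sketch of a custom dataloader for 3DHP-style data. The annotation
# format (a pickle of dicts with 'image', 'joints_2d', 'joints_3d' keys) is an
# assumption, not part of this repository.
import pickle
import torch
from torch.utils.data import Dataset
from PIL import Image

class MPIINF3DHPDataset(Dataset):
    def __init__(self, ann_file, transform=None):
        with open(ann_file, 'rb') as f:
            # list of dicts with keys: 'image', 'joints_2d', 'joints_3d'
            self.samples = pickle.load(f)
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        sample = self.samples[idx]
        img = Image.open(sample['image']).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        joints_2d = torch.as_tensor(sample['joints_2d'], dtype=torch.float32)
        joints_3d = torch.as_tensor(sample['joints_3d'], dtype=torch.float32)
        return img, joints_2d, joints_3d
```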

Dear @initialneil,

The answers above might address your concerns: your test images are from 3DHP, and the pretrained models are for Human3.6M only.

mkocabas commented 4 years ago

Recently, we released a new 3D pose+shape estimation model that can work with in-the-wild videos. If you are still interested in this, you may refer to https://github.com/mkocabas/VIBE.

Thanks!