
# PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision

Kehong Gong*, Bingbing Li*, Jianfeng Zhang*, Tao Wang*, Jing Huang, Michael Bi Mi, Jiashi Feng, Xinchao Wang

CVPR 2022 (Oral Presentation, arXiv)

## Framework

**PoseTriplet contains three components: an estimator, an imitator, and a hallucinator.** The three components form a dual loop during training, complementing and strengthening one another; a minimal conceptual sketch of this loop appears below, after the comparison.

![alt text](assets/dual-loop-detail-v3.jpg)

## Improvement through co-evolving

Below is the imitated motion from different training rounds. The estimator and imitator improve over the rounds, so the imitated motion becomes more accurate and realistic from round 1 to round 3.

![alt text](assets/improvement-h123a3b/round123_wihtile_1x4.gif)

## Video demo

https://user-images.githubusercontent.com/37209147/160742585-3dc9ddf9-b6e0-4ea0-be4c-df67a21ef192.mp4

## Comparison

Here we compare our results with two recent works: [Yu et al.](https://openaccess.thecvf.com/content/ICCV2021/papers/Yu_Towards_Alleviating_the_Modeling_Ambiguity_of_Unsupervised_Monocular_3D_Human_ICCV_2021_paper.pdf) and [Hu et al.](https://arxiv.org/pdf/2109.09166.pdf).
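To make the dual loop described under Framework concrete, here is a minimal sketch of how the three components might hand data to one another across training rounds. All class and function names here (`Estimator`, `Imitator`, `Hallucinator`, `co_evolve`) are hypothetical placeholders for illustration only, not the repository's actual API; the real implementations live under `estimator/` and `imitator/`.

```python
# Minimal, self-contained sketch of the estimator -> imitator -> hallucinator
# dual loop. All components are toy placeholders, NOT the PoseTriplet code.
import numpy as np


class Estimator:
    """Lifts 2D keypoints to 3D poses (placeholder: zero depth)."""
    def lift(self, pose2d):
        depth = np.zeros(pose2d.shape[:-1] + (1,))
        return np.concatenate([pose2d, depth], axis=-1)

    def retrain(self, pose2d, pose3d):
        # Supervised update on (2D, 3D) pairs produced by the other two
        # components; omitted in this sketch.
        pass


class Imitator:
    """Stands in for the physics-based RL policy that replays an estimated
    motion and returns a physically plausible version (placeholder: simple
    temporal smoothing)."""
    def imitate(self, motion3d):
        smoothed = motion3d.copy()
        smoothed[1:-1] = (motion3d[:-2] + motion3d[1:-1] + motion3d[2:]) / 3.0
        return smoothed


class Hallucinator:
    """Stands in for the motion generator that produces novel but plausible
    motions (placeholder: small random perturbation)."""
    def hallucinate(self, motion3d, rng):
        return motion3d + 0.01 * rng.standard_normal(motion3d.shape)


def project(motion3d):
    """Orthographic projection back to 2D, pairing hallucinated 3D motion
    with pseudo 2D input for retraining the estimator."""
    return motion3d[..., :2]


def co_evolve(video_pose2d, rounds=3, seed=0):
    rng = np.random.default_rng(seed)
    estimator, imitator, hallucinator = Estimator(), Imitator(), Hallucinator()
    for r in range(rounds):
        # Forward loop: estimate -> imitate -> hallucinate.
        est3d = estimator.lift(video_pose2d)
        imi3d = imitator.imitate(est3d)
        hal3d = hallucinator.hallucinate(imi3d, rng)
        # Backward loop: hallucinated 3D motion and its projection serve as
        # pseudo ground truth to strengthen the estimator for the next round.
        estimator.retrain(project(hal3d), hal3d)
        print(f"round {r + 1}: generated {hal3d.shape[0]} frames of pseudo ground truth")
    return estimator


if __name__ == "__main__":
    dummy_2d = np.zeros((100, 17, 2))  # 100 frames, 17 joints, (x, y)
    co_evolve(dummy_2d)
```

The point of the sketch is only the data flow: each round, the imitator makes the estimator's output physically plausible, the hallucinator diversifies it, and the resulting pairs supervise the estimator again, which is why all three improve together over rounds.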
# Installation

* Please refer to [`README_env.md`](./README_env.md) for the Python environment setup.

# Data Preparation

* Please refer to [`estimator/README.md`](./estimator/README.md) for the preparation of the dataset files.

# Training

Please refer to [`script-summary`](./imitator/script-summary-gt2d-v5.sh) for the training process.

We also provide a [checkpoint folder](https://drive.google.com/drive/folders/1iGh1Sk30Tg8-UgGXM_8KwTQSbh7jcW9Y?usp=sharing) with better performance, which suggests that this framework has the potential to reach the same performance as fully-supervised approaches.

Note: the checkpoint for the RL policy is not included due to the size limitation; please follow the training code to train the policy.

# Inference

We provide inference code [here](https://github.com/Garfield-kh/PoseTriplet/tree/main/estimator_inference). Please follow the instructions there and download the pretrained model to run inference on videos.

# Talk

Here is a [slidestalk](https://www.slidestalk.com/m/832) ([PPT](https://drive.google.com/drive/folders/1oEJfnjR1NupC4SVo7_hk2wMu5BPsRBd2?usp=sharing) in English, spoken in Chinese).

# Citation

If you find this code useful for your research, please consider citing the following paper:

```bibtex
@inproceedings{gong2022posetriplet,
  title     = {PoseTriplet: Co-evolving 3D Human Pose Estimation, Imitation, and Hallucination under Self-supervision},
  author    = {Gong, Kehong and Li, Bingbing and Zhang, Jianfeng and Wang, Tao and Huang, Jing and Mi, Michael Bi and Feng, Jiashi and Wang, Xinchao},
  booktitle = {CVPR},
  year      = {2022}
}
```