
🌟 DPoser: Diffusion Model as Robust 3D Human Pose Prior 🌟

🔗 Project Page | 🎥 Video | 📄 Paper

Authors

Junzhe Lu, Jing Lin, Hongkun Dou, Ailing Zeng, Yue Deng, Yulun Zhang, Haoqian Wang


📊 An overview of DPoser's versatility and performance across multiple pose-related tasks

📘 1. Introduction

Welcome to the official implementation of DPoser: Diffusion Model as Robust 3D Human Pose Prior. 🚀
In this repository, we're excited to introduce DPoser, a robust 3D human pose prior built on diffusion models. DPoser is designed to enhance pose-centric applications such as human mesh recovery, pose completion, and motion denoising. Let's dive in!
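
At a high level, a score-based prior like DPoser can be plugged into downstream optimization as an extra gradient term that pulls the estimate toward plausible poses. The sketch below only illustrates this general idea, not DPoser's exact training or inference code; `score_fn`, `data_term`, the pose dimensionality, and the weighting are hypothetical placeholders.

  import torch

  def prior_guided_step(pose, score_fn, data_term, t, lr=1e-2, w_prior=0.1):
      """One optimization step mixing a task loss with a diffusion-prior term.

      pose:      (B, 63) axis-angle body pose being optimized (shape is an assumption)
      score_fn:  learned score network s_theta(x, t) approximating grad_x log p_t(x)
      data_term: task-specific loss, e.g. 2D keypoint reprojection error
      t:         diffusion time at which the prior is queried
      """
      pose = pose.detach().requires_grad_(True)
      loss = data_term(pose)                          # fit the observations
      grad_data = torch.autograd.grad(loss, pose)[0]
      with torch.no_grad():
          grad_prior = score_fn(pose, t)              # points toward higher prior density
      # Descend the data loss while ascending the prior log-density.
      return (pose - lr * (grad_data - w_prior * grad_prior)).detach()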

🔄 Switch to the 'v2' Branch for Enhanced Features!

We highly recommend switching to the 'v2' branch, which supports additional pose priors and features cleaner, more structured code. To switch, use the following command:

git checkout v2

🛠️ 2. Set Up Your Environment

🚀 3. Quick Demo

🎭 Pose Generation

Generate poses and save rendered images:

  python -m run.demo --config configs/subvp/amass_scorefc_continuous.py --task generation

For videos of the generation process:

  python -m run.demo --config configs/subvp/amass_scorefc_continuous.py --task generation_process

🧩 Pose Completion

Complete poses and view results:

  python -m run.demo --config configs/subvp/amass_scorefc_continuous.py --task completion --hypo 10 --part right_arm --view right_half

Explore other solvers, such as ScoreSDE, for our DPoser prior:

  python -m run.demo --config configs/subvp/amass_scorefc_continuous.py --task completion2 --hypo 10 --part right_arm --view right_half
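
In the completion task, part of the body is treated as observed and the prior fills in the rest; --hypo presumably sets how many hypotheses are drawn per input. As a rough illustration of how a part mask over the axis-angle parameters could look (the part-to-joint mapping and joint count here are assumptions, not DPoser's actual definitions):

  import numpy as np

  # Hypothetical mapping from body parts to SMPL body-joint indices (21 joints, 3 DoF each).
  PART_JOINTS = {
      "right_arm": [16, 18, 20],   # e.g. right shoulder, elbow, wrist
      "legs": [0, 1, 3, 4, 6, 7],  # e.g. hips, knees, ankles
  }

  def make_observation_mask(part, num_joints=21):
      """Return a (num_joints * 3,) mask: 1 = observed, 0 = to be completed."""
      mask = np.ones(num_joints * 3, dtype=np.float32)
      for j in PART_JOINTS[part]:
          mask[3 * j: 3 * j + 3] = 0.0   # hide the occluded part
      return mask

  mask = make_observation_mask("right_arm")
  # A completion solver then samples the masked entries conditioned on the visible ones.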

🌪️ Motion Denoising

Denoise the example motion sequence and summarize the visual results in a video:

  python -m run.motion_denoising --config configs/subvp/amass_scorefc_continuous.py --file-path ./examples/Gestures_3_poses_batch005.npz --noise-std 0.04
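
The --noise-std flag sets the standard deviation of the synthetic noise (presumably Gaussian) added to the clean sequence before denoising. The snippet below only illustrates that perturbation; the npz key and the exact quantity being perturbed (pose parameters vs. 3D joints) are assumptions, so consult run/motion_denoising.py for the real setup.

  import numpy as np

  data = np.load("./examples/Gestures_3_poses_batch005.npz")
  clean = data["poses"]                                            # (T, D) motion sequence; key is an assumption
  noisy = clean + np.random.normal(scale=0.04, size=clean.shape)   # matches --noise-std 0.04
  # DPoser then recovers an estimate of `clean` from `noisy` using the learned prior.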

🕺 Human Mesh Recovery

Use the 2D keypoints detected by OpenPose and save the fitting results:

  python -m run.demo_fit --img=./examples/image_00077.jpg --openpose=./examples/image_00077_keypoints.json
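
The fitting script consumes OpenPose-format detections. For reference, such a keypoint file can be inspected as follows; this is a minimal sketch assuming the standard OpenPose JSON layout with a BODY_25 skeleton and a single detected person.

  import json
  import numpy as np

  with open("./examples/image_00077_keypoints.json") as f:
      det = json.load(f)

  # Each person entry stores flattened (x, y, confidence) triplets.
  kpts = np.array(det["people"][0]["pose_keypoints_2d"]).reshape(-1, 3)
  print(kpts.shape)  # (25, 3) for the BODY_25 skeleton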

🧑‍🔬 4. Train DPoser Yourself

Dataset Preparation

To train DPoser, we use the AMASS dataset. You have two options for dataset preparation:

πŸ‹οΈβ€β™‚οΈ Start Training

After setting up your dataset, begin training DPoser:

  python -m run.train --config configs/subvp/amass_scorefc_continuous.py --name reproduce

This command will start the training process. The checkpoints, TensorBoard logs, and validation visualization results will be stored under ./output/amass_amass.

🧪 5. Test DPoser

Pose Generation

Quantitatively evaluate 500 generated samples using this script:

  python -m run.demo --config configs/subvp/amass_scorefc_continuous.py --task generation --metrics

This will use the SMPL body model to evaluate APD and SI following Pose-NDF.
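
For reference, APD (Average Pairwise Distance) measures the diversity of the generated samples as the mean L2 distance between all sample pairs; the SI metric depends on the SMPL mesh geometry and is not sketched here. A minimal APD computation could look like this (the representation over which distances are taken, e.g. 3D joint positions, follows the evaluation script):

  import numpy as np

  def average_pairwise_distance(samples):
      """APD: mean L2 distance over all ordered pairs of distinct samples.

      samples: (N, D) array, e.g. N=500 generated poses flattened to joint positions.
      """
      n = samples.shape[0]
      diffs = samples[:, None, :] - samples[None, :, :]   # (N, N, D)
      dists = np.linalg.norm(diffs, axis=-1)               # (N, N), zeros on the diagonal
      return dists.sum() / (n * (n - 1))                   # exclude self-pairs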

Pose Completion

For testing on the AMASS dataset (make sure you've completed the dataset preparation in Step 4):

  python -m run.completion --config configs/subvp/amass_scorefc_continuous.py --gpus 1 --hypo 10 --sample 10 --part legs

Motion Denoising

To evaluate motion denoising on the AMASS dataset, use the following steps:

Human Mesh Recovery

To test on the EHF dataset, follow these steps:

❓ Troubleshooting

🙏 Acknowledgement

Big thanks to ScoreSDE, GFPose, and Hand4Whole for their foundational work and code.

📚 Reference

@article{lu2023dposer,
  title={DPoser: Diffusion Model as Robust 3D Human Pose Prior},
  author={Lu, Junzhe and Lin, Jing and Dou, Hongkun and Zhang, Yulun and Deng, Yue and Wang, Haoqian},
  journal={arXiv preprint arXiv:2312.05541},
  year={2023}
}