
[ICCV 2023] Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling
https://zxz267.github.io/AvatarJLM/
MIT License

Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling

Xiaozheng Zheng†, Zhuo Su†, Chao Wen, Zhou Xue*, Xiaojie Jin
ByteDance
†Equal contribution   *Corresponding author
:star_struck: Accepted to ICCV 2023
--- AvatarJLM uses tracking signals of the head and hands to estimate accurate, smooth, and plausible full-body motions. :open_book: For more visual results, check out our project page ---
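A common way to represent such sparse head-and-hand observations in this line of work (e.g. the AvatarPoser-style pipeline this repo builds on) is to stack, per tracked joint, a 6-D rotation, a position, and their finite-difference velocities. The sketch below is illustrative only; the exact input layout AvatarJLM uses is defined in its code, and `pack_sparse_input` is a hypothetical helper:

```python
import numpy as np

def pack_sparse_input(rot6d, pos, dt=1.0 / 60.0):
    """Pack head/left-hand/right-hand tracking into per-frame features.

    rot6d: (T, 3, 6) 6-D rotation representation for the 3 tracked joints
    pos:   (T, 3, 3) world positions of the 3 tracked joints
    Returns (T-1, 3 * (6 + 6 + 3 + 3)) features: rotation, rotation
    velocity, position, and linear velocity per joint.
    """
    rot_vel = (rot6d[1:] - rot6d[:-1]) / dt   # finite-difference rotation velocity
    lin_vel = (pos[1:] - pos[:-1]) / dt       # finite-difference linear velocity
    feats = np.concatenate([rot6d[1:], rot_vel, pos[1:], lin_vel], axis=-1)
    return feats.reshape(feats.shape[0], -1)  # (T-1, 54)

x = pack_sparse_input(np.zeros((8, 3, 6)), np.zeros((8, 3, 3)))
print(x.shape)  # (7, 54)
```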

[Project Page][arXiv]

:mega: Updates

[09/2023] Testing samples are available.

[09/2023] Training and testing codes are released.

[07/2023] AvatarJLM is accepted to ICCV 2023 :partying_face:!

:file_folder: Data Preparation

AMASS

  1. Please download the datasets from AMASS.
  2. Download the required body models and place them in the ./support_data/body_models directory of this repository. For the SMPL+H body model, download the AMASS version with DMPL blendshapes from http://mano.is.tue.mpg.de/. You can obtain the dynamic shape blendshapes (DMPLs) from http://smpl.is.tue.mpg.de.
  3. Run ./data/prepare_data.py to preprocess the input data for faster training. The train/test split for Protocol 1 in our paper is stored in the ./data/data_split folder (from AvatarPoser).
    python ./data/prepare_data.py --protocol [1, 2, 3] --root [path to AMASS]

Real-Captured Data

  1. Please download our real-captured testing data from Google Drive. The data is preprocessed into the same format as our preprocessed AMASS data.
  2. Unzip the data and place it in the ./data directory of this repository.

:desktop_computer: Requirements

:bicyclist: Training

python train.py --protocol [1, 2, 3] --task [name of the experiment] 
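The actual training objective is defined in train.py. As a purely illustrative sketch (not the exact AvatarJLM loss), full-body avatar training in this setting typically combines a per-joint rotation term with a joint-position term; `body_loss` and its weights are hypothetical:

```python
import numpy as np

def body_loss(pred_rot, gt_rot, pred_pos, gt_pos, w_rot=1.0, w_pos=1.0):
    """Weighted sum of L1 rotation error and L1 joint-position error."""
    l_rot = np.abs(pred_rot - gt_rot).mean()
    l_pos = np.abs(pred_pos - gt_pos).mean()
    return w_rot * l_rot + w_pos * l_pos
```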

:running_woman: Evaluation

python test.py --protocol [1, 2, 3, real] --task [name of the experiment] --checkpoint [path to trained checkpoint] [--vis]

:lollipop: Trained Model

| Protocol | MPJRE | MPJPE | MPJVE | Trained Model |
| --- | --- | --- | --- | --- |
| 1 | 3.01 | 3.35 | 21.01 | Google Drive |
| 2-CMU-Test | 5.36 | 7.28 | 26.46 | Google Drive |
| 2-BML-Test | 4.65 | 6.22 | 34.45 | Google Drive |
| 2-MPI-Test | 5.85 | 6.47 | 24.13 | Google Drive |
| 3 | 4.25 | 4.92 | 27.04 | Google Drive |
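These metrics follow the convention of prior sparse-tracking work such as AvatarPoser: MPJRE is mean per-joint rotation error (degrees), MPJPE is mean per-joint position error (cm), and MPJVE is mean per-joint velocity error (cm/s). A minimal sketch of the two position-based metrics, assuming meter-scale inputs of shape (T, J, 3); the rotation metric is omitted since it depends on the rotation representation:

```python
import numpy as np

def mpjpe_cm(pred, gt):
    """Mean Per-Joint Position Error in cm (inputs in meters, shape (T, J, 3))."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean() * 100.0)

def mpjve_cm_s(pred, gt, fps=60.0):
    """Mean Per-Joint Velocity Error in cm/s via finite differences."""
    vel_pred = (pred[1:] - pred[:-1]) * fps
    vel_gt = (gt[1:] - gt[:-1]) * fps
    return float(np.linalg.norm(vel_pred - vel_gt, axis=-1).mean() * 100.0)
```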

:love_you_gesture: Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{zheng2023realistic,
  title={Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling},
  author={Zheng, Xiaozheng and Su, Zhuo and Wen, Chao and Xue, Zhou and Jin, Xiaojie},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

:newspaper_roll: License

Distributed under the MIT License. See LICENSE for more information.

:raised_hands: Acknowledgements

This project is built on source code shared by AvatarPoser. We thank the authors for their great work!