zxz267 / HaMuCo

[ICCV 2023] HaMuCo: Hand Pose Estimation via Multiview Collaborative Self-Supervised Learning
https://zxz267.github.io/HaMuCo/
MIT License

HaMuCo: Hand Pose Estimation via Multiview Collaborative Self-Supervised Learning

Xiaozheng Zheng¹,²†   Chao Wen²†   Zhou Xue²   Pengfei Ren¹,²   Jingyu Wang¹*
¹Beijing University of Posts and Telecommunications   ²PICO IDL, ByteDance
†Equal contribution   *Corresponding author
:star_struck: Accepted to ICCV 2023
HaMuCo is a multi-view self-supervised 3D hand pose estimation method that requires only 2D pseudo labels for training.

[Project Page] | [arXiv]
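To make the idea above concrete, here is a minimal PyTorch sketch (illustrative only, with assumed tensor names, shapes, and camera conventions; not the repository's code) of the two ingredients named in the tagline: a re-projection loss against per-view 2D pseudo labels, and a cross-view consistency term that pulls each view's 3D prediction toward a multi-view consensus.

```python
# Illustrative sketch only (not the official implementation): multi-view
# self-supervision from 2D pseudo labels, with assumed names and shapes.
import torch
import torch.nn.functional as F

def project(joints_cam, K):
    """Pinhole projection: (V, J, 3) camera-space joints, (V, 3, 3) intrinsics -> (V, J, 2)."""
    uvw = torch.einsum('vij,vkj->vki', K, joints_cam)
    return uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)

def multiview_self_supervised_loss(joints_cam, pseudo_2d, K, R, t):
    # joints_cam: (V, J, 3) per-view 3D predictions in each camera's frame
    # pseudo_2d : (V, J, 2) 2D pseudo labels (e.g. from an off-the-shelf 2D detector)
    # K: (V, 3, 3) intrinsics; R: (V, 3, 3), t: (V, 3) camera-to-world extrinsics
    reproj = F.l1_loss(project(joints_cam, K), pseudo_2d)  # 2D re-projection term

    # Express every view's prediction in a shared world frame and pull it
    # toward the multi-view consensus (here simply the detached mean).
    joints_world = torch.einsum('vij,vkj->vki', R, joints_cam) + t[:, None, :]
    consensus = joints_world.mean(dim=0, keepdim=True).detach()
    consistency = F.l1_loss(joints_world, consensus.expand_as(joints_world))
    return reproj + consistency

# Shape check with dummy data: 8 views, 21 hand joints.
V, J = 8, 21
pred = torch.randn(V, J, 3, requires_grad=True)
loss = multiview_self_supervised_loss(pred, torch.rand(V, J, 2),
                                      torch.eye(3).expand(V, 3, 3),
                                      torch.eye(3).expand(V, 3, 3),
                                      torch.zeros(V, 3))
loss.backward()
print(loss.item())
```

The consensus here is just a detached per-joint mean across views; it stands in for the cross-view collaborative supervision described in the paper and is meant only as a shape-level illustration.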

:black_square_button: TODO

:mega: Updates

[07/2023] HaMuCo is accepted to ICCV 2023 :partying_face:!

[01/2023] Training and evaluation code for HanCo is released.

:file_folder: Data Preparation

1. Download the HanCo dataset from the official website.
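A quick way to confirm the download landed where the dataloader will look for it is a check like the one below. Note that the root path and the subfolder names (`rgb`, `calib`, `xyz`) are assumptions based on the public HanCo release, so adjust them to this repository's actual configuration.

```python
# Hedged sanity check: the path and subfolder names below are assumptions,
# not guaranteed to match this repository's dataloader configuration.
from pathlib import Path

HANCO_ROOT = Path('./data/HanCo')  # hypothetical location of the extracted dataset
for sub in ('rgb', 'calib', 'xyz'):
    print(f'{sub:>6}: {"ok" if (HANCO_ROOT / sub).is_dir() else "MISSING"}')
```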

:desktop_computer: Installation

Requirements

Setup with Conda

conda create -n hamuco python=3.7
conda activate hamuco
pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
cd HaMuCo
pip install -r ./requirements.txt
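After installation, a quick check (generic, not part of the repository) confirms that the pinned PyTorch build can actually see the GPU before you start training:

```python
# Post-install sanity check (generic sketch, not part of the repository).
import torch
import torchvision

print('torch      :', torch.__version__)        # expected: 1.9.1+cu111
print('torchvision:', torchvision.__version__)  # expected: 0.10.1+cu111
print('CUDA available:', torch.cuda.is_available())
```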

:running_woman: Training

1. Run ./train.py to train and evaluate on the HanCo dataset.
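Evaluation for 3D hand pose is commonly reported as MPJPE (mean per-joint position error). The snippet below is a generic sketch of that metric with made-up tensors, not the repository's evaluation code:

```python
# Generic MPJPE sketch (illustrative only; not the repo's evaluation code).
import torch

def mpjpe(pred, gt):
    """pred, gt: (N, 21, 3) hand joints in the same units -> mean Euclidean error."""
    return (pred - gt).norm(dim=-1).mean()

print(mpjpe(torch.randn(4, 21, 3), torch.randn(4, 21, 3)).item())
```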

:love_you_gesture: Citation

If you find our work useful for your research, please consider citing the paper:

@inproceedings{zheng2023hamuco,
  title={HaMuCo: Hand Pose Estimation via Multiview Collaborative Self-Supervised Learning},
  author={Zheng, Xiaozheng and Wen, Chao and Xue, Zhou and Ren, Pengfei and Wang, Jingyu},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2023}
}

:newspaper_roll: License

Distributed under the MIT License. See LICENSE for more information.

:raised_hands: Acknowledgements

The PyTorch implementation of MANO is based on manopth. We thank the authors for their great work!