[ICCV 2023 Oral] Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image
https://pengfeiren96.github.io/DIR/
MIT License

Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image

Pengfei Ren<sup>1,2</sup>, Chao Wen<sup>2</sup>, Xiaozheng Zheng<sup>1,2</sup>, Zhou Xue<sup>2</sup>
Haifeng Sun<sup>1</sup>, Qi Qi<sup>1</sup>, Jingyu Wang<sup>1</sup>, Jianxin Liao<sup>1</sup>*
<sup>1</sup>Beijing University of Posts and Telecommunications   <sup>2</sup>PICO IDL ByteDance
*Corresponding author
:star_struck: Accepted to ICCV 2023 as Oral
--- Our method, DIR, achieves accurate and robust reconstruction of interacting hands. :open_book: For more visual results, check out our project page. ---

[Project Page](https://pengfeiren96.github.io/DIR/) [arXiv]

:mega: Updates

[10/2023] Released the pre-trained models 👏!

[07/2023] DIR is accepted to ICCV 2023 (Oral) :partying_face:!

:love_you_gesture: Citation

If you find our work useful for your research, please consider citing the paper:

```bibtex
@inproceedings{ren2023decoupled,
    title={Decoupled Iterative Refinement Framework for Interacting Hands Reconstruction from a Single RGB Image},
    author={Ren, Pengfei and Wen, Chao and Zheng, Xiaozheng and Xue, Zhou and Sun, Haifeng and Qi, Qi and Wang, Jingyu and Liao, Jianxin},
    booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    year={2023}
}
```

:desktop_computer: Data Preparation

  1. Download the necessary assets misc.tar.gz and unzip them.
  2. Download the InterHand2.6M dataset and unzip it.
  3. Process the dataset with the code provided by IntagHand:

```shell
python dataset/interhand.py --data_path PATH_OF_INTERHAND2.6M --save_path ./data/interhand2.6m/
```
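The IntagHand processing script writes its output under `--save_path`. The exact file layout it produces is determined by that script, so as a generic sanity check before training you can at least verify that the target directory exists and is non-empty (the helper below is a hypothetical convenience, not part of this repo):

```python
import os

def processed_data_present(save_path: str) -> bool:
    """Generic sanity check: True if save_path exists and contains at least one file."""
    if not os.path.isdir(save_path):
        return False
    for _root, _dirs, files in os.walk(save_path):
        if files:
            return True
    return False
```

After step 3, `processed_data_present("./data/interhand2.6m/")` should return `True`.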

:desktop_computer: Installation

Requirements

Setup with Conda

```shell
# create and activate the conda env
conda create -n dir python=3.8
conda activate dir
# install torch
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
# install pytorch3d
pip install fvcore iopath
pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/py38_cu113_pyt1110/download.html
# install the other requirements
cd DIR
pip install -r ./requirements.txt
# install manopth
cd manopth
pip install -e .
```
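Note that the pytorch3d wheel index above is version-locked: the `py38_cu113_pyt1110` tag must match your Python (3.8), CUDA (11.3), and torch (1.11.0) builds. As a small illustrative helper (an assumption for this sketch, not part of the repo), you can extract the CUDA tag from a torch version string and compare it against the `cu113` in the wheel URL:

```python
def cuda_tag(torch_version: str):
    """Return the CUDA build tag (e.g. 'cu113') from a version string like
    '1.11.0+cu113', or None for CPU-only / unexpected builds."""
    if "+" not in torch_version:
        return None
    local = torch_version.split("+", 1)[1]
    return local if local.startswith("cu") else None
```

At runtime you would pass `torch.__version__`; for the pinned install above, `cuda_tag("1.11.0+cu113")` returns `"cu113"`.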

:train: Training

```shell
python train.py
```

:running_woman: Evaluation

Download the pre-trained models from Google Drive.

```shell
python apps/eval_interhand.py --data_path ./interhand2.6m/ --model ./checkpoint/xxx
```

You can use a different joint ID for alignment by setting `root_joint` (0: Wrist, 9: MCP).
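For reference, root alignment simply translates both the prediction and the ground truth so the chosen root joint sits at the origin before measuring per-joint distances; a global translation error thus cancels out, leaving only articulation error. A minimal NumPy sketch (the evaluation script's actual implementation may differ):

```python
import numpy as np

def root_align(joints: np.ndarray, root_id: int = 0) -> np.ndarray:
    """Translate joints of shape (..., J, 3) so joint `root_id` (0: Wrist, 9: MCP)
    sits at the origin."""
    return joints - joints[..., root_id:root_id + 1, :]

def mean_joint_error(pred: np.ndarray, gt: np.ndarray, root_id: int = 0) -> float:
    """Mean per-joint Euclidean distance after root alignment (same units as input)."""
    diff = root_align(pred, root_id) - root_align(gt, root_id)
    return float(np.linalg.norm(diff, axis=-1).mean())
```

Because of the alignment, a prediction that is merely a rigid translation of the ground truth scores zero error.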

With `root_joint` set to 0 (Wrist), you should get the following output:

```
joint mean error:
    left: 10.732769034802914 mm, right: 9.722338989377022 mm
    all: 10.227554012089968 mm
vert mean error:
    left: 10.479239746928215 mm, right: 9.52134095132351 mm
    all: 10.000290349125862 mm
pixel joint mean error:
    left: 6.329594612121582 mm, right: 5.843323707580566 mm
    all: 6.086459159851074 mm
pixel vert mean error:
    left: 6.235759735107422 mm, right: 5.768411636352539 mm
    all: 6.0020856857299805 mm
root error: 29.26051989197731 mm
```

(We fixed some minor bugs, so the performance is slightly higher than the values reported in the paper.)

:newspaper_roll: License

Distributed under the MIT License. See LICENSE for more information.

:raised_hands: Acknowledgements

The PyTorch implementation of MANO is based on manopth. We also use parts of the code from IntagHand. We thank the authors for their great work!