This repo is the official PyTorch implementation of **Bringing Inputs to Shared Domains for 3D Interacting Hands Recovery in the Wild (CVPR 2023)**.
## Quick demo

* Prepare the `human_model_files` folder following the `Directory` section below and place it at `common/utils/human_model_files`.
* Move to the `demo` folder.
* Put input images in the `images` folder. Each image should be a cropped image that contains a single human, obtained, for example, with a human detector. We have a hand detection network, so no need to worry about the hand positions! See the cropping sketch after this list.
* Run `python demo.py --gpu $GPU_ID`.
* The demo saves bounding boxes, output meshes, MANO parameters, and rendered results to the `boxes`, `meshes`, `params`, and `renders` folders, respectively.
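For reference, here is a minimal sketch of the cropping step, assuming you already have a person bounding box from any off-the-shelf human detector (the file names and box coordinates below are hypothetical placeholders):

```python
# A minimal cropping sketch, assuming a person bounding box from any
# off-the-shelf human detector. File names and box values are hypothetical.
from PIL import Image

img = Image.open('full_scene.jpg')               # original, uncropped photo
x_min, y_min, x_max, y_max = 100, 50, 600, 900   # hypothetical detector output
img.crop((x_min, y_min, x_max, y_max)).save('images/person_0.jpg')
```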
## Directory

The `${ROOT}` is described as below.
```
${ROOT}
|-- data
|-- demo
|-- common
|-- main
|-- output
```
* `data` contains data loading codes and soft links to images and annotations directories.
* `demo` contains the demo code.
* `common` contains kernel code. You should put `MANO_RIGHT.pkl` and `MANO_LEFT.pkl` at `common/utils/human_model_files/mano`, which are available here.
* `main` contains high-level codes for training or testing the network.
* `output` contains logs, trained models, visualized outputs, and test results.

You need to follow the directory structure of the `data` folder as below.
```
${ROOT}
|-- data
|   |-- InterHand26M
|   |   |-- annotations
|   |   |   |-- train
|   |   |   |-- test
|   |   |-- images
|   |-- MSCOCO
|   |   |-- annotations
|   |   |   |-- coco_wholebody_train_v1.0.json
|   |   |   |-- coco_wholebody_val_v1.0.json
|   |   |   |-- MSCOCO_train_MANO_NeuralAnnot.json
|   |   |-- images
|   |   |   |-- train2017
|   |   |   |-- val2017
|   |-- HIC
|   |   |-- data
|   |   |   |-- HIC.json
|   |-- ReInterHand
|   |   |-- data
|   |   |   |-- m--*
```
* InterHand2.6M: `images` contains images at 5 fps, and `annotations` contains the `H+M` subset.
* MSCOCO: `MSCOCO_train_MANO_NeuralAnnot.json` can be downloaded from [here].
* HIC: Download 1) `Hand-Hand Interaction` sequences (`01.zip`-`14.zip`), 2) some of the `Hand-Object Interaction` sequences (`15.zip`-`21.zip`), and 3) the MANO fits. Or you can simply run `python download.py` in the `data/HIC` folder.
* ReInterHand: Download the dataset and place it at `data/ReInterHand/data`.
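If you want to sanity-check the layout above before training, a small sketch like the following (paths taken directly from the tree above; set `ROOT` to your own `${ROOT}`) can report missing entries:

```python
# Sanity-check sketch for the data layout above; adjust ROOT to your setup.
import os

ROOT = '.'  # path to ${ROOT}
required = [
    'data/InterHand26M/annotations/train',
    'data/InterHand26M/annotations/test',
    'data/InterHand26M/images',
    'data/MSCOCO/annotations/coco_wholebody_train_v1.0.json',
    'data/MSCOCO/annotations/coco_wholebody_val_v1.0.json',
    'data/MSCOCO/annotations/MSCOCO_train_MANO_NeuralAnnot.json',
    'data/MSCOCO/images/train2017',
    'data/MSCOCO/images/val2017',
    'data/HIC/data/HIC.json',
    'data/ReInterHand/data',
]
for rel in required:
    if not os.path.exists(os.path.join(ROOT, rel)):
        print('missing:', rel)
```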
You need to follow the directory structure of the `output` folder as below.
```
${ROOT}
|-- output
|   |-- log
|   |-- model_dump
|   |-- result
|   |-- vis
```
* `log` folder contains the training log file.
* `model_dump` folder contains saved checkpoints for each epoch.
* `result` folder contains final estimation files generated in the testing stage.
* `vis` folder contains visualized results.

## Train

Prepare the `human_model_files` folder following the `Directory` section above and place it at `common/utils/human_model_files`.
In the `main` folder, run

```
python train.py --gpu 0-3
```

to train the network on GPUs 0, 1, 2, and 3. `--gpu 0,1,2,3` can be used instead of `--gpu 0-3`. If you want to continue an experiment, use `--continue`.
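The two `--gpu` forms are equivalent. As an illustration of the flag's semantics (a hypothetical helper, not the repo's actual argument-handling code):

```python
# Illustrative sketch of how the two --gpu forms map to the same device list.
# This is a hypothetical helper, not the repo's actual argument handling.
def parse_gpu_ids(arg):
    if '-' in arg:                                # range form, e.g. '0-3'
        start, end = map(int, arg.split('-'))
        return list(range(start, end + 1))
    return [int(i) for i in arg.split(',')]       # list form, e.g. '0,1,2,3'

assert parse_gpu_ids('0-3') == parse_gpu_ids('0,1,2,3') == [0, 1, 2, 3]
```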
## Test

Place the trained model at `output/model_dump`. Testing on InterHand2.6M is done on the `human_annot` subset, specified by `data/InterHand26M/aid_human_annot_test.txt`.

In the `main` folder, run

```
python test.py --gpu 0-3 --test_epoch 6
```

to test the network on GPUs 0, 1, 2, and 3 with `snapshot_6.pth`. `--gpu 0,1,2,3` can be used instead of `--gpu 0-3`.
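Checkpoints are saved per epoch in `output/model_dump` (e.g., `snapshot_6.pth` for `--test_epoch 6`). If you want to peek into a checkpoint before testing, a quick sketch (the dict keys are an assumption about the saved format, not a documented contract):

```python
# Sketch: peek into a saved checkpoint before testing, run from the main
# folder. The dict keys are assumptions about the checkpoint layout, not
# a documented guarantee.
import torch

ckpt = torch.load('../output/model_dump/snapshot_6.pth', map_location='cpu')
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. might include 'epoch', 'network', 'optimizer'
```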
## Reference

```
@inproceedings{moon2023interwild,
  author = {Moon, Gyeongsik},
  title = {Bringing Inputs to Shared Domains for {3D} Interacting Hands Recovery in the Wild},
  booktitle = {CVPR},
  year = {2023}
}

@inproceedings{moon2023reinterhand,
  title = {A Dataset of Relighted {3D} Interacting Hands},
  author = {Moon, Gyeongsik and Saito, Shunsuke and Xu, Weipeng and Joshi, Rohan and Buffalini, Julia and Bellan, Harley and Rosen, Nicholas and Richardson, Jesse and Mize, Mallorie and Bree, Philippe and Simon, Tomas and Peng, Bo and Garg, Shubham and McPhail, Kevyn and Shiratori, Takaaki},
  booktitle = {NeurIPS Track on Datasets and Benchmarks},
  year = {2023},
}
```
## License

This repo is CC-BY-NC 4.0 licensed, as found in the LICENSE file.