by Haozhe Qi, Chen Zhao, Mathieu Salzmann, Alexander Mathis, EPFL (Switzerland).
Clone the repository:

```bash
git clone git@github.com:amathislab/HOISDF.git
```
Set up the conda environment:

```bash
conda create --name hoisdf python=3.9
conda activate hoisdf
# install the PyTorch version compatible with your CUDA version
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
pip install -r requirements.txt
```
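To verify the environment, you can check that this PyTorch build sees your GPU (a quick sanity check, not part of the repository):

```python
# Sanity check for the PyTorch + CUDA installation
import torch

print(torch.__version__)          # expected: 1.12.1+cu116
print(torch.cuda.is_available())  # should print True on a CUDA machine
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```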
Download the MANO model files (`MANO_LEFT.pkl` and `MANO_RIGHT.pkl`) from the MANO website and place them in the `tool/mano_models` folder.
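To confirm the files are in place, a minimal check (the actual model loading is handled by the repository code):

```python
# Check that the MANO model files are where the code expects them
from pathlib import Path

mano_dir = Path("tool/mano_models")
for name in ("MANO_LEFT.pkl", "MANO_RIGHT.pkl"):
    path = mano_dir / name
    assert path.is_file(), f"missing {path}"
print("MANO model files found.")
```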
Download the YCB models from here and set `object_models_dir` in `config.py` to point to the dataset folder. The original mesh models are large and have different numbers of vertices for different objects. To enable batched inference, we additionally use simplified object models with 1000 vertices each. Download the simplified models from here and set `simple_object_models_dir` in `config.py` to point to the dataset folder.
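The fixed vertex count is what makes batching possible: same-sized vertex arrays can be stacked into a single tensor. A minimal sketch of the idea (file names are placeholders, and `trimesh` stands in for whatever mesh loader you prefer):

```python
# Meshes with an identical vertex count stack into one (batch, 1000, 3) tensor,
# which is what enables batched inference over different objects.
import numpy as np
import torch
import trimesh

paths = ["obj_a_simplified.obj", "obj_b_simplified.obj"]  # placeholder files
verts = [np.asarray(trimesh.load(p, force="mesh").vertices, dtype=np.float32)
         for p in paths]
batch = torch.from_numpy(np.stack(verts))  # fails unless all meshes have 1000 vertices
print(batch.shape)  # torch.Size([2, 1000, 3])
```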
Download the processed annotation files for both datasets from here and set `annotation_dir` in `config.py` to point to the processed data folder.
Depending on the dataset you intend to train/evaluate on, follow the instructions below for the setup.
For the HO3Dv2 dataset, set `ho3d_data_dir` in `config.py` to point to the dataset folder and `fast_data_dir` in `config.py` to point to the processed SDF folder; the downloaded SDF files go into the `fast_data_dir` folder.

For the DexYCB dataset, set `dexycb_data_dir` in `config.py` to point to the dataset folder and `fast_data_dir` in `config.py` to point to the processed SDF folder.
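For reference, once everything is downloaded the path settings in `config.py` should look something like this (the directory values below are placeholders for your own locations):

```python
# Example path configuration in config.py (placeholder paths)
object_models_dir = "/data/ycb_models"         # original YCB meshes
simple_object_models_dir = "/data/ycb_simple"  # simplified 1000-vertex meshes
annotation_dir = "/data/hoisdf_annotations"    # processed annotation files
ho3d_data_dir = "/data/HO3D_v2"                # HO3Dv2 dataset
dexycb_data_dir = "/data/DexYCB"               # DexYCB dataset
fast_data_dir = "/data/sdf_processed"          # processed SDF data
```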
You can also run the `tool/pre_process_sdf.py` script to process the SDF data yourself.
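For intuition, SDF preprocessing boils down to sampling signed distances from query points to the hand and object meshes. A minimal illustrative sketch with `trimesh` (not the repository's actual script; the mesh file is a placeholder):

```python
# Illustrative SDF sampling around a mesh (not tool/pre_process_sdf.py)
import numpy as np
import trimesh

mesh = trimesh.load("object.obj", force="mesh")  # placeholder mesh file

# Sample query points in a box slightly larger than the mesh bounds
points = np.random.uniform(low=mesh.bounds[0] - 0.05,
                           high=mesh.bounds[1] + 0.05,
                           size=(1024, 3))

# trimesh's convention: positive inside the surface, negative outside
sdf = trimesh.proximity.signed_distance(mesh, points)
print(sdf.shape)  # (1024,)
```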
Depending on the dataset you intend to evaluate on, follow the instructions below. To test the model with our trained weights, you can download the weights from the links provided here and put them in the `ckpts` folder.

For HO3Dv2 evaluation, modify the `setting` parameter in `config.py`:
- `setting = 'ho3d'` for evaluating the model trained only on the HO3Dv2 training set.
- `setting = 'ho3d_render'` for evaluating the model also trained on the rendered data.

Then run:

```bash
python main/test.py --ckpt_path ckpts/ho3d/snapshot_ho3d.pth.tar  # for the ho3d setting
python main/test.py --ckpt_path ckpts/ho3d_render/snapshot_ho3d_render.pth.tar  # for the ho3d_render setting
```
The evaluation results are saved to a `results.txt` file in the folder containing the checkpoint. The script also produces a `pred_mano.json` file, which can be submitted to the HO-3D (v2) challenge after zipping the file.
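For example, assuming `pred_mano.json` was written next to the checkpoint:

```bash
cd ckpts/ho3d
zip pred_mano.zip pred_mano.json  # upload the resulting zip to the challenge server
```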
For DexYCB evaluation, modify the `setting` parameter in `config.py`:
- `setting = 'dexycb'` for evaluating the model trained only on the DexYCB split, which includes only right-hand data.
- `setting = 'dexycb_full'` for evaluating the model trained on the DexYCB Full split, which includes both right- and left-hand data.

Then run:

```bash
python main/test.py --ckpt_path ckpts/dexycb/snapshot_dexycb.pth.tar  # for the dexycb setting
python main/test.py --ckpt_path ckpts/dexycb_full/snapshot_dexycb_full.pth.tar  # for the dexycb_full setting
```
The evaluation results are saved to a `results.txt` file in the folder containing the checkpoint. For the `dexycb_full` setting, additional hand mesh results are reported in the `results.txt` file (Table 3 in the paper).

Depending on the dataset you intend to train on, follow the instructions below.
Set `output_dir` in `config.py` to point to the directory where the checkpoints will be saved. Then modify the `setting` parameter in `config.py`:
- `setting = 'ho3d'` for training the model on the HO3Dv2 training set.
- `setting = 'ho3d_render'` for training the model also on the rendered data.
- `setting = 'dexycb'` for training the model on the DexYCB split, which includes only right-hand data.
- `setting = 'dexycb_full'` for training the model on the DexYCB Full split, which includes both right- and left-hand data.

Set `CUDA_VISIBLE_DEVICES` and `--gpu` to the desired GPU ids. Here is an example command for training on two GPUs:
```bash
CUDA_VISIBLE_DEVICES=0,1 python main/train.py --run_dir_name test --gpu 0,1
```
To resume training from a saved checkpoint, add the `--continue` argument to the above command.
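For example (assuming the interrupted run used the same `--run_dir_name`):

```bash
CUDA_VISIBLE_DEVICES=0,1 python main/train.py --run_dir_name test --gpu 0,1 --continue
```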
If you find our code or ideas useful, please cite:

```bibtex
@inproceedings{qi2024hoisdf,
  title={HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields},
  author={Qi, Haozhe and Zhao, Chen and Salzmann, Mathieu and Mathis, Alexander},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={10392--10402},
  year={2024}
}
```
Link to CVPR article: HOISDF: Constraining 3D Hand-Object Pose Estimation with Global Signed Distance Fields