Code for the CVPR 2024 paper "Closely Interactive Human Reconstruction with Proxemics and Physics-Guided Adaption"
Buzhen Huang, Chen Li, Chongyang Xu, Liang Pan, Yangang Wang, Gim Hee Lee
[Paper]
Create a conda environment and install the dependencies.
conda create -n closeint python=3.8
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
You may also need BVH_CUDA for collision detection.
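As a quick sanity check before proceeding (a minimal sketch using only standard PyTorch calls), you can confirm that the installed build sees your GPU:

import torch
print(torch.__version__)          # expect 2.0.1 (the cu118 build)
print(torch.cuda.is_available())  # should print True if CUDA is set up correctly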
Download the official SMPL models from the SMPL and SMPLify websites and place them in data/smpl.
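To verify that the model files are placed correctly, a minimal sketch using the smplx package (an assumption for illustration; the repository may load SMPL differently) is:

import torch
import smplx

# loads data/smpl/SMPL_NEUTRAL.pkl and runs a forward pass with a neutral pose
model = smplx.SMPL(model_path='data/smpl', gender='neutral')
out = model(betas=torch.zeros(1, 10),
            body_pose=torch.zeros(1, 69),
            global_orient=torch.zeros(1, 3))
print(out.vertices.shape)  # SMPL has 6890 vertices, so this prints torch.Size([1, 6890, 3])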
Training, Step 1:
Following other video-based methods, we extract image features as the network input for training efficiency. The preprocessed annotations can be obtained from Baidu Netdisk or Google Drive. You should also download the original images from the Hi4D website. The folder structure is as follows:
./data
├── dataset
│   └── Hi4D
│       ├── annot
│       └── images
├── smpl
│   ├── SMPL_MALE.pkl
│   ├── SMPL_FEMALE.pkl
│   └── SMPL_NEUTRAL.pkl
└── checkpoint.pkl
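If you want a quick check that everything is in place (a hypothetical convenience snippet, not part of the repository), the paths in the tree above can be verified with:

from pathlib import Path

root = Path('./data')
for rel in ['dataset/Hi4D/annot', 'dataset/Hi4D/images',
            'smpl/SMPL_MALE.pkl', 'smpl/SMPL_FEMALE.pkl',
            'smpl/SMPL_NEUTRAL.pkl', 'checkpoint.pkl']:
    print(rel, 'OK' if (root / rel).exists() else 'MISSING')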
Training, Step 2:
python train.py --config cfg_files/config.yaml
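The training options live in cfg_files/config.yaml. A minimal sketch for inspecting them before launching (assuming the file is plain YAML and PyYAML is installed; the key names are repo-specific and not listed here):

import yaml

with open('cfg_files/config.yaml') as f:
    cfg = yaml.safe_load(f)
print(list(cfg.keys()))  # top-level options you can adjust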
Evaluation, Step 1:
Download the checkpoint file from Baidu Netdisk or Google Drive. You may also need the SDF loss for evaluating penetration.
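For reference, here is an illustrative sketch of how an SDF-based penetration measure is typically computed (an assumption for illustration, not the repository's implementation): signed distances of one person's vertices with respect to the other person's surface are clamped so that only vertices inside the other mesh contribute.

import torch

def penetration_depth(signed_dists):
    # signed_dists: (N,) distances of person A's vertices to person B's surface,
    # negative inside B; returns the clamped penetration averaged over all vertices
    return torch.clamp(-signed_dists, min=0.0).mean()

# toy example: three vertices, one of them 2 cm inside the other mesh
print(penetration_depth(torch.tensor([0.10, -0.02, 0.05])))  # 0.02 / 3 ≈ 0.0067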
Evaluation, Step 2:
python eval.py --config cfg_files/eval.yaml
If you find this code useful for your research, please consider citing the paper.
@inproceedings{huanginteraction,
  title={Closely Interactive Human Reconstruction with Proxemics and Physics-Guided Adaption},
  author={Huang, Buzhen and Li, Chen and Xu, Chongyang and Pan, Liang and Wang, Yangang and Lee, Gim Hee},
  booktitle={IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2024}
}
Some of the code is based on the following works. We gratefully acknowledge their impact on our work.
InterGen
SMPLX