This is the code for the CVPR 2023 paper "Skinned Motion Retargeting with Residual Perception of Motion Semantics & Geometry" by Jiaxu Zhang et al.
R2ET is a neural motion retargeting model that can preserve the source motion semantics and avoid interpenetration in the target motion.
Create and activate a conda environment:
conda create python=3.9 --name r2et
conda activate r2et
Install the packages in requirements.txt:
pip install -r requirements.txt
Then install PyTorch 1.10.0:
conda install pytorch==1.10.0 torchvision==0.11.0 torchaudio==0.10.0 cudatoolkit=10.2 -c pytorch
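To verify the environment, a quick sanity check (not part of the original pipeline):
import torch
print(torch.__version__)          # expect 1.10.0
print(torch.cuda.is_available())  # expect True on a CUDA-capable machine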
Training data:
First, create an account on the Mixamo website.
Next, download the FBX animation files for each character folder in ./datasets/mixamo/train_char/. The animation list follows NKN; we collected 1,952 non-overlapping motion sequences for training.
Once the FBX files have been downloaded, run the following Blender script to convert them into BVH files:
blender -b -P ./datasets/fbx2bvh.py
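For reference, the conversion is roughly of the following form (a minimal sketch using Blender's Python API; the actual ./datasets/fbx2bvh.py batches over all character folders and its details may differ):
import bpy

# Hypothetical example paths; the real script iterates over ./datasets/mixamo/train_char/.
in_path = "./datasets/mixamo/train_char/Character/motion.fbx"
out_path = "./datasets/mixamo/train_char/Character/motion.bvh"

bpy.ops.import_scene.fbx(filepath=in_path)  # load the Mixamo animation
action = bpy.data.actions[-1]               # most recently imported action
start, end = (int(f) for f in action.frame_range)
# Assumes the imported armature is the active object, as required by the BVH exporter.
bpy.ops.export_anim.bvh(filepath=out_path, frame_start=start, frame_end=end)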
Finally, preprocess the BVH files into .npy files by running:
python ./datasets/preprocess_q.py
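You can sanity-check the result by loading one of the generated files (the file name below is hypothetical; check the output directory for the actual naming):
import numpy as np

motion = np.load("./datasets/mixamo/train_char/Character/motion.npy")
print(motion.shape, motion.dtype)  # e.g. (num_frames, feature_dim)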
The shape information for each character's T-pose, saved in ./datasets/mixamo/train_shape (already preprocessed), was generated by:
blender -b -P ./datasets/extract_shape.py
Build and install the SDF extension:
cd ./outside-code/sdf
python setup.py install
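If the build succeeds, the extension should be importable (assuming it installs under the module name sdf; adjust if your build registers a different name):
import sdf  # raises ImportError if the extension did not build correctly
print(sdf.__file__)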
Inference with BVH files:
python3 inference_bvh.py --config ./config/inference_bvh_cfg.yaml
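Input and output settings live in the YAML config; you can inspect its keys before running (requires PyYAML):
import yaml

with open("./config/inference_bvh_cfg.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg)  # shows which paths and options the inference script reads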
Training:
Skeleton-aware network:
python3 train_skeleton_aware.py --config ./config/train_skeleton_aware.yaml
Shape-aware network:
python3 train_shape_aware.py --config ./config/train_shape_aware.yaml
Visualization:
The visualization parameters are in ./visualization/options.py.
cd ./visualization
blender -P visualize.py
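If you want to pass options to the script from the command line, note that Blender ignores everything after a "--" separator and hands it to the script; a generic pattern (not specific to options.py):
import sys

# blender -P visualize.py -- --some_option value
argv = sys.argv[sys.argv.index("--") + 1:] if "--" in sys.argv else []
print(argv)  # forward these to your argument parser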
Citation:
@inproceedings{zhang2023skinned,
title={Skinned Motion Retargeting with Residual Perception of Motion Semantics \& Geometry},
author={Zhang, Jiaxu and Weng, Junwu and Kang, Di and Zhao, Fang and Huang, Shaoli and Zhe, Xuefei and Bao, Linchao and Shan, Ying and Wang, Jue and Tu, Zhigang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={13864--13872},
year={2023}
}
Acknowledgements: Our code partially borrows from PMnet, SAN, and NKN; we thank their authors.