RoHM is a novel diffusion-based motion model that, conditioned on noisy and occluded input data, reconstructs complete, plausible motions in consistent global coordinates. We decompose the problem into two sub-tasks and learn two models, one for the global trajectory and one for local motion. To capture the correlations between the two, we introduce a novel conditioning module and combine it with an iterative inference scheme.
Create a clean conda environment and install all dependencies:
conda env create -f environment.yml
After the installation is complete, activate the conda environment:
conda activate rohm
`dataset_name` indicates the name of each subset. The processed AMASS data will be saved to `datasets/AMASS_smplx_preprocessed`.
python preprocessing_amass.py --dataset_name=SUBSET_NAME --amass_root=PATH/TO/AMASS --save_root=datasets/AMASS_smplx_preprocessed
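If you downloaded multiple AMASS subsets, a small shell loop over the documented flags can preprocess them in one pass. The subset names below are placeholders for the subsets you actually have, and the commands are only echoed here; remove the `echo` to run them:

```shell
# Preprocess several AMASS subsets with one loop.
# SUBSET_NAME_1/SUBSET_NAME_2 are placeholders, not real subset names.
AMASS_ROOT=PATH/TO/AMASS
SAVE_ROOT=datasets/AMASS_smplx_preprocessed
for SUBSET in SUBSET_NAME_1 SUBSET_NAME_2; do
  # Echoed for illustration; drop `echo` to actually preprocess.
  echo python preprocessing_amass.py --dataset_name="$SUBSET" \
    --amass_root="$AMASS_ROOT" --save_root="$SAVE_ROOT"
done
```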
Download the following contents for the PROX dataset:
- `cam2world`, `calibration` and `recordings` from the official PROX dataset
- `keypoints_openpose` and `mask_joint` from here

PROX
├── cam2world
├── calibration
├── recordings
├── keypoints_openpose
├── mask_joint
Download the following contents for the EgoBody dataset:
- `kinect_color`, `data_splits.csv`, `calibrations`, `kinect_cam_params`, `smplx_camera_wearer_*` and `smplx_interactee_*` from the official EgoBody dataset
- `keypoints_cleaned`, `mask_joint` and `egobody_rohm_info.csv` from here

EgoBody
├── kinect_color
├── data_splits.csv
├── smplx_camera_wearer_train
├── smplx_camera_wearer_test
├── smplx_camera_wearer_val
├── smplx_interactee_train
├── smplx_interactee_test
├── smplx_interactee_val
├── calibrations
├── kinect_cam_params
├── keypoints_cleaned
├── mask_joint
├── egobody_rohm_info.csv
`egobody_rohm_info.csv` contains information about the EgoBody recordings that we use for the evaluation of RoHM.
Download the SMPL-X body model from here. Note that the latest version is 1.1, while we use 1.0 in our implementation.
Download the SMPL-X vertex segmentation file `smplx_vert_segmentation.json` from here.
Download the model checkpoints from here. Download other processed/saved data from here and unzip, including:
- `init_motions`: initialized motion sequences (RoHM input) on PROX and EgoBody
- `test_results_release`: reconstructed motion sequences (RoHM output) on PROX and EgoBody
- `eval_noise_smplx`: pre-computed motion noise for RoHM evaluation on AMASS

RoHM
├── data
│ ├── body_models
│ │ ├── smplx_model
│ │ │ ├── smplx
│ ├── checkpoints
│ ├── eval_noise_smplx
│ ├── init_motions
│ ├── test_results_release
│ ├── smplx_vert_segmentation.json
├── datasets
│ ├── AMASS_smplx_preprocessed
│ ├── PROX
│ ├── EgoBody
RoHM is trained on the AMASS dataset.
Train the vanilla TrajNet with a curriculum training scheme in three stages, with increasing noise ratios:
python train_trajnet.py --config=cfg_files/train_cfg/trajnet_train_vanilla_stage1.yaml
python train_trajnet.py --config=cfg_files/train_cfg/trajnet_train_vanilla_stage2.yaml --pretrained_model_path=PATH/TO/MODEL
python train_trajnet.py --config=cfg_files/train_cfg/trajnet_train_vanilla_stage3.yaml --pretrained_model_path=PATH/TO/MODEL
For stages 2 and 3, set `pretrained_model_path` to the trained checkpoint from the previous stage.
To obtain the reported checkpoint, we train for 800k/400k/450k steps for stage 1/2/3, respectively.
Fine-tune TrajNet with the TrajControl module:

python train_trajnet.py --config=cfg_files/train_cfg/trajnet_ft_trajcontrol.yaml --pretrained_backbone_path=PATH/TO/MODEL
Set `pretrained_backbone_path` to the pre-trained checkpoint of the vanilla TrajNet; we train for 400k steps to obtain the reported checkpoint.
Train PoseNet with a curriculum training scheme for two stages, with increasing noise ratios:
python train_posenet.py --config=cfg_files/train_cfg/posenet_train_stage1.yaml
python train_posenet.py --config=cfg_files/train_cfg/posenet_train_stage2.yaml --pretrained_model_path=PATH/TO/MODEL
For stage 2, set `pretrained_model_path` to the trained checkpoint from the previous stage.
To obtain the reported checkpoint, we train for 300k/200k steps for stage 1/2, respectively.
Test on AMASS with different configurations (corresponds to Tab. 1 in the paper) and save reconstructed results to `test_results/results_amass_full`:
Note that running the given configurations with the same random seed does not guarantee exactly the same numbers across different machines; however, the stochasticity is quite small.
python test_amass_full.py --config=cfg_files/test_cfg/amass_occ_0.1_noise_3.yaml
python test_amass_full.py --config=cfg_files/test_cfg/amass_occ_leg_noise_3.yaml
python test_amass_full.py --config=cfg_files/test_cfg/amass_occ_leg_noise_5.yaml
python test_amass_full.py --config=cfg_files/test_cfg/amass_occ_leg_noise_7.yaml
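Since the four test runs differ only in the config file, they can be launched back to back with a loop over the documented `--config` flag. The commands are echoed here for illustration; remove the `echo` to execute them:

```shell
# Run all four AMASS test configurations (Tab. 1) sequentially.
for CFG in amass_occ_0.1_noise_3 amass_occ_leg_noise_3 \
           amass_occ_leg_noise_5 amass_occ_leg_noise_7; do
  # Echoed for illustration; drop `echo` to actually run the tests.
  echo python test_amass_full.py --config="cfg_files/test_cfg/${CFG}.yaml"
done
```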
Calculate the evaluation metrics and visualize/render the reconstructed results on AMASS.
python eval_amass_full.py --config=cfg_files/eval_cfg/amass_occ_0.1_noise_3.yaml --saved_data_path=PATH/TO/TEST/RESULTS
python eval_amass_full.py --config=cfg_files/eval_cfg/amass_occ_leg_noise_3.yaml --saved_data_path=PATH/TO/TEST/RESULTS
python eval_amass_full.py --config=cfg_files/eval_cfg/amass_occ_leg_noise_5.yaml --saved_data_path=PATH/TO/TEST/RESULTS
python eval_amass_full.py --config=cfg_files/eval_cfg/amass_occ_leg_noise_7.yaml --saved_data_path=PATH/TO/TEST/RESULTS
Other flags for visualization and rendering:
- `--visualize=True`: visualize input/output/GT motions with `open3d` (with both skeletons and body meshes)
- `--render=True`: render the input/output/GT motions with `pyrender` and save the rendered results to `--render_save_path`
Corresponds to the experiment setups in Tab. 2 and Tab. 3 in the paper.
To obtain the initial (noisy and partially visible) motions on PROX, we use the following options:
We provide our preprocessed initial motion sequences in the folder `data/init_motions`, and the final output motion sequences from RoHM in the folder `data/test_results_release` for your reference.
Note that for the following scripts, the initial motions should have the z-axis up for PROX and the y-axis up for EgoBody.
Test on PROX with RGB-D input and save reconstructed results to `test_results/results_prox_rgbd`:
python test_prox_egobody.py --config=cfg_files/test_cfg/prox_rgbd.yaml --recording_name=RECORDING_NAME
Test on PROX with RGB-only input and save reconstructed results to `test_results/results_prox_rgb`:
python test_prox_egobody.py --config=cfg_files/test_cfg/prox_rgb.yaml --recording_name=RECORDING_NAME
Test on EgoBody with RGB input and save reconstructed results to `test_results/results_egobody_rgb`:
python test_prox_egobody.py --config=cfg_files/test_cfg/egobody_rgb.yaml --recording_name=RECORDING_NAME
Calculate the evaluation metrics and visualize/render the reconstructed results on PROX/EgoBody.
python eval_prox_egobody.py --config=cfg_files/eval_cfg/prox_rgbd.yaml --saved_data_dir=PATH/TO/TEST/RESULTS --recording_name=RECORDING_NAME
python eval_prox_egobody.py --config=cfg_files/eval_cfg/prox_rgb.yaml --saved_data_dir=PATH/TO/TEST/RESULTS --recording_name=RECORDING_NAME
python eval_prox_egobody.py --config=cfg_files/eval_cfg/egobody_rgb.yaml --saved_data_dir=PATH/TO/TEST/RESULTS --recording_name=RECORDING_NAME
Note: `recording_name` can be set to `all`, in which case the evaluation is done over all sequences in the subset (used to report the numbers in the paper).

Other flags for visualization and rendering:
- `--visualize=True`: visualize input/output/GT motions with `open3d`
- `--vis_option=mesh`: visualize body meshes
- `--vis_option=skeleton`: visualize skeletons
- `--render=True`: render the input/output/GT motions with `pyrender` and save the rendered results to `--render_save_path`
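Putting the pieces together, a single call that evaluates the whole PROX RGB-D subset and visualizes body meshes could look like the sketch below; `PATH/TO/TEST/RESULTS` remains a placeholder for your saved test results, and the command is echoed rather than executed:

```shell
# Evaluate over all recordings in the subset and visualize body meshes.
SAVED_DIR=PATH/TO/TEST/RESULTS  # placeholder: your saved test results
# Echoed for illustration; drop `echo` to actually run the evaluation.
echo python eval_prox_egobody.py --config=cfg_files/eval_cfg/prox_rgbd.yaml \
  --saved_data_dir="$SAVED_DIR" --recording_name=all \
  --visualize=True --vis_option=mesh
```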
If you want to run RoHM on your customized input:
- prepare the initial motion sequences in the same format as `data/init_motions`
- prepare the joint occlusion masks in the same format as `datasets/PROX/mask_joint` (see `utils/get_occlusion_mask.py` for how we obtain occlusion masks on the PROX dataset)

The majority of RoHM is licensed under CC-BY-NC (including the code, released checkpoints, and the released dataset of initialized/final motion sequences); however, portions of the project are available under separate license terms:
If you find our work useful in your research, please consider citing:
@inproceedings{zhang2024rohm,
title={RoHM: Robust Human Motion Reconstruction via Diffusion},
author={Zhang, Siwei and Bhatnagar, Bharat Lal and Xu, Yuanlu and Winkler, Alexander and Kadlecek, Petr and Tang, Siyu and Bogo, Federica},
booktitle={CVPR},
year={2024}
}