News
01/21/2024
We release the Mobile-Stage dataset and the SyntheticHuman++ dataset.

11/04/2023
The enhanced version of the paper has been accepted to T-PAMI. We have updated the information about the journal version of the paper.

05/17/2021
To make comparisons on ZJU-MoCap easier, we save the quantitative and qualitative results of other methods here, including Neural Volumes, Multi-view Neural Human Rendering, and Deferred Neural Human Rendering.

05/13/2021
To make it easier for follow-up works to compare with our model, we save our rendering results on ZJU-MoCap here and provide a document describing the training and test protocols.

05/12/2021
The code supports testing and visualization on unseen human poses.

05/12/2021
We update the ZJU-MoCap dataset with better-fitted SMPL parameters obtained with EasyMocap. We also release a website for visualization. Please see here for the usage of the provided SMPL parameters.

Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans
Sida Peng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
CVPR 2021

Implicit Neural Representations with Structured Latent Codes for Human Body Modeling
Sida Peng, Chen Geng, Yuanqing Zhang, Yinghao Xu, Qianqian Wang, Qing Shuai, Hujun Bao, Xiaowei Zhou
TPAMI 2023
Any questions or discussions are welcome!
Installation
Please see INSTALL.md for manual installation.

Installation with Docker
Please see docker/README.md.
Thanks to Zhaoyi Wan for providing the Docker implementation.

Run the code on a custom dataset
Please see CUSTOM.
Run the code on People-Snapshot
Please see INSTALL.md to download the dataset.
We provide the pretrained models here.
We already provide some processed data. If you want to process more videos of People-Snapshot, you can use tools/process_snapshot.py.
You can also visualize the SMPL parameters of People-Snapshot with tools/vis_snapshot.py.
Take the visualization on female-3-casual as an example. The command lines for visualization are recorded in visualize.sh.
Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/female3c/latest.pth.
Visualization:
# render novel views of a single frame
python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_view True num_render_views 144
# render the human in unseen poses
python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_novel_pose True
# generate meshes
python run.py --type visualize --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c vis_mesh True train.num_workers 0
# visualize a specific mesh
python tools/render_mesh.py --exp_name female3c --dataset people_snapshot --mesh_ind 226
The results of visualization are located at $ROOT/data/render/female3c and $ROOT/data/perform/female3c.
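For reference, the vis_mesh mode extracts an explicit surface from the learned density field. Conceptually, this kind of extraction runs marching cubes on a density grid sampled inside the human bounding box; the sketch below only illustrates the idea and is not the repository's actual code (query_density, the grid resolution, and the threshold are assumptions):

# Conceptual sketch of mesh extraction from a density field via marching cubes.
# query_density, the grid resolution, and the density threshold are assumptions,
# not the repository's actual implementation.
import numpy as np
import trimesh
from skimage import measure

def extract_mesh(query_density, bbox_min, bbox_max, resolution=256, threshold=50.0):
    # Sample the density field on a regular 3D grid inside the bounding box.
    xs = np.linspace(bbox_min[0], bbox_max[0], resolution)
    ys = np.linspace(bbox_min[1], bbox_max[1], resolution)
    zs = np.linspace(bbox_min[2], bbox_max[2], resolution)
    grid = np.stack(np.meshgrid(xs, ys, zs, indexing="ij"), axis=-1)
    sigma = query_density(grid.reshape(-1, 3)).reshape(resolution, resolution, resolution)
    # Run marching cubes on the density grid, then map voxel coordinates back to world space.
    verts, faces, _, _ = measure.marching_cubes(sigma, level=threshold)
    verts = verts / (resolution - 1) * (np.asarray(bbox_max) - np.asarray(bbox_min)) + np.asarray(bbox_min)
    return trimesh.Trimesh(vertices=verts, faces=faces)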
Take the training on female-3-casual as an example. The command lines for training are recorded in train.sh.
# training
python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False
# distributed training
python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False gpus "0, 1, 2, 3" distributed True
# training with a white background
python train_net.py --cfg_file configs/snapshot_exp/snapshot_f3c.yaml exp_name female3c resume False white_bkgd True
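For reference, the distributed command launches one process per GPU with torch.distributed.launch, and each worker is expected to initialize a process group and wrap the network in DistributedDataParallel. The sketch below shows that general pattern only; it is not the repository's train_net.py, and the model is a placeholder:

# Minimal sketch of the per-process setup expected by torch.distributed.launch.
# The model and training loop are placeholders, not the repository's code.
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # filled in by the launcher
args = parser.parse_args()

dist.init_process_group(backend="nccl")   # one process per GPU
torch.cuda.set_device(args.local_rank)

model = torch.nn.Linear(128, 3).cuda()    # placeholder for the real network
model = DDP(model, device_ids=[args.local_rank])
# ... build a DistributedSampler-backed dataloader and run the usual training loop ...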
tensorboard --logdir data/record/if_nerf
Run the code on ZJU-MoCap
Please see INSTALL.md to download the dataset.
We provide the pretrained models here.
The newly fitted SMPL parameters are stored in new_params. Currently, the released pretrained models are trained on the previously fitted parameters, which are located in params. To train on the new parameters, use zju_smpl/extract_vertices.py to extract the corresponding SMPL vertices. It is okay to train Neural Body with SMPL parameters fitted by smplx.
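As a rough illustration of what extracting vertices from fitted SMPL parameters involves, the sketch below uses the smplx package. It is not the repository's zju_smpl/extract_vertices.py; the parameter file path, its keys (poses, shapes, Rh, Th), and the handling of the global rotation/translation are assumptions about the dataset layout:

# Hedged sketch: recover SMPL vertices from fitted parameters with the smplx package.
# The file path, the keys (poses, shapes, Rh, Th), and the handling of the global
# transform are assumptions, not guaranteed by the repository.
import numpy as np
import torch
import smplx

params = np.load("new_params/0.npy", allow_pickle=True).item()  # hypothetical path
poses = torch.from_numpy(np.asarray(params["poses"])).float().reshape(1, 72)
shapes = torch.from_numpy(np.asarray(params["shapes"])).float().reshape(1, -1)
Rh = torch.from_numpy(np.asarray(params["Rh"])).float().reshape(1, 3)  # global rotation (axis-angle)
Th = torch.from_numpy(np.asarray(params["Th"])).float().reshape(1, 3)  # global translation

body_model = smplx.create("data/smpl_model", model_type="smpl", gender="neutral")  # hypothetical model path
output = body_model(betas=shapes[:, :10],
                    global_orient=Rh,
                    body_pose=poses[:, 3:],
                    transl=Th,
                    return_verts=True)
vertices = output.vertices.detach().numpy()[0]  # (6890, 3) world-space vertices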
The command lines for testing are recorded in test.sh.
Take the test on sequence 313 as an example.
Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.
# test on training human poses
python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313
# test on unseen human poses
python run.py --type evaluate --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 test_novel_pose True
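The evaluate mode compares the rendered images against the ground-truth views; the paper reports PSNR and SSIM on these renderings. As a reference, PSNR between two images in [0, 1] is computed as below (a generic metric sketch, not the repository's evaluator):

# Generic PSNR between a rendered image and the ground truth, both float arrays in [0, 1].
# This is a reference formula, not the repository's evaluation code.
import numpy as np

def psnr(rendered: np.ndarray, ground_truth: np.ndarray) -> float:
    mse = np.mean((rendered - ground_truth) ** 2)
    return float(10.0 * np.log10(1.0 / mse))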
Take the visualization on sequence 313 as an example. The command lines for visualization are recorded in visualize.sh.
Download the corresponding pretrained model and put it at $ROOT/data/trained_model/if_nerf/xyzc_313/latest.pth.
Visualization:
Visualize novel views of a single frame
python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True
Visualize novel views of a single frame by rotating the SMPL model
python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_view True num_render_views 100
Visualize views of dynamic humans with a fixed camera
python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000 num_render_views 1
Visualize views of dynamic humans with a rotated camera
python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_novel_pose True num_render_frame 1000
Visualize mesh
# generate meshes
python run.py --type visualize --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 vis_mesh True train.num_workers 0
# visualize a specific mesh
python tools/render_mesh.py --exp_name xyzc_313 --dataset zju_mocap --mesh_ind 0
The results of visualization are located at $ROOT/data/render/xyzc_313 and $ROOT/data/perform/xyzc_313.
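The rotating-camera modes above render one view per camera pose on a circle around the subject. The sketch below shows one common way to build such a circular camera path; it only illustrates the idea, and the center, radius, and height are placeholder values rather than the repository's path parameters:

# Hedged sketch: camera-to-world poses on a circle around a subject, one per view.
# Center, radius, and height are illustrative values, not the repository's parameters.
import numpy as np

def circle_camera_poses(num_views, center=np.zeros(3), radius=2.5, height=0.8):
    poses = []
    for theta in np.linspace(0.0, 2.0 * np.pi, num_views, endpoint=False):
        cam_pos = center + np.array([radius * np.cos(theta), radius * np.sin(theta), height])
        forward = center - cam_pos
        forward = forward / np.linalg.norm(forward)          # camera looks at the subject
        right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
        right = right / np.linalg.norm(right)
        up = np.cross(right, forward)
        c2w = np.eye(4)
        c2w[:3, 0], c2w[:3, 1], c2w[:3, 2], c2w[:3, 3] = right, up, forward, cam_pos
        poses.append(c2w)
    return np.stack(poses)  # (num_views, 4, 4)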
Take the training on sequence 313 as an example. The command lines for training are recorded in train.sh.
# training
python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False
# distributed training
python -m torch.distributed.launch --nproc_per_node=4 train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False gpus "0, 1, 2, 3" distributed True
# training with a white background
python train_net.py --cfg_file configs/zju_mocap_exp/latent_xyzc_313.yaml exp_name xyzc_313 resume False white_bkgd True
tensorboard --logdir data/record/if_nerf
Citation
If you find this code useful for your research, please use the following BibTeX entries.
@article{peng2023implicit,
  title={Implicit Neural Representations with Structured Latent Codes for Human Body Modeling},
  author={Peng, Sida and Geng, Chen and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Zhou, Xiaowei and Bao, Hujun},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2023},
  publisher={IEEE}
}

@inproceedings{peng2021neural,
  title={Neural Body: Implicit Neural Representations with Structured Latent Codes for Novel View Synthesis of Dynamic Humans},
  author={Peng, Sida and Zhang, Yuanqing and Xu, Yinghao and Wang, Qianqian and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2021}
}