
Representing Volumetric Videos as Dynamic MLP Maps

Project Page | Video | Paper | Data

*(teaser figure)*

Representing Volumetric Videos as Dynamic MLP Maps
Sida Peng*, Yunzhi Yan*, Qing Shuai, Hujun Bao, Xiaowei Zhou (* equal contribution)
CVPR 2023

Questions and discussions are welcome!

Installation

Please see [INSTALL.md](INSTALL.md) for manual installation.

Interactive demo

**Interactive rendering on ZJU-MoCap**

Please see [INSTALL.md](INSTALL.md) to download the dataset. We provide the pretrained models [here](https://drive.google.com/drive/folders/1ZRgoBijRRK2ML09P7VPJUXtBHgTSWD4D?usp=sharing). Take the rendering of `sequence 313` as an example.

1. Download the corresponding pretrained model and put it at `$ROOT/data/trained_model/zjumocap/313/final.pth`.
2. Run the interactive rendering demo:
   ```
   python gui.py --config configs/zjumocap/dymap_313.py fast_render True
   ```
**Interactive rendering on NHR**

Please see [INSTALL.md](INSTALL.md) to download the dataset. We provide the pretrained models [here](https://drive.google.com/drive/folders/1ZRgoBijRRK2ML09P7VPJUXtBHgTSWD4D?usp=sharing). Take the rendering of `sequence sport1` as an example.

1. Download the corresponding pretrained model and put it at `$ROOT/data/trained_model/nhr/sport1/final.pth`.
2. Run the interactive rendering demo:
   ```
   python gui.py --config configs/nhr/sport1.py fast_render True
   ```
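Throughout this README, the trailing `key value` pairs in the commands (e.g., `fast_render True`, `mode evaluate`) are configuration overrides applied on top of the `--config` file. Below is a minimal sketch of how such overrides can be parsed, assuming they are flat key/value pairs; `parse_overrides` is a hypothetical helper, not the repo's actual parser:

```python
# Sketch: turn trailing "key value" CLI pairs into a config-override dict.
# The override format is inferred from the commands above; the repo's
# actual parser may differ.
import argparse
import ast

def parse_overrides(pairs):
    """Turn ['fast_render', 'True', 'mode', 'evaluate'] into a dict."""
    if len(pairs) % 2 != 0:
        raise ValueError("overrides must come in key/value pairs")
    overrides = {}
    for key, raw in zip(pairs[::2], pairs[1::2]):
        try:
            overrides[key] = ast.literal_eval(raw)  # 'True' -> bool, '4' -> int
        except (ValueError, SyntaxError):
            overrides[key] = raw                    # fall back to a raw string
    return overrides

parser = argparse.ArgumentParser()
parser.add_argument("--config", required=True)
parser.add_argument("opts", nargs=argparse.REMAINDER)
args = parser.parse_args()
print(parse_overrides(args.opts))  # e.g. {'fast_render': True}
```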

Run the code on ZJU-MoCap

Please see [INSTALL.md](INSTALL.md) to download the dataset.

We provide the pretrained models [here](https://drive.google.com/drive/folders/1ZRgoBijRRK2ML09P7VPJUXtBHgTSWD4D?usp=sharing).

**Test on ZJU-MoCap**

Take the test on `sequence 313` as an example.

1. Download the corresponding pretrained model and put it at `$ROOT/data/trained_model/zjumocap/313/final.pth`.
2. Test on unseen views:
   ```
   python run.py --config configs/zjumocap/dymap_313.py mode evaluate fast_render True
   ```
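Before evaluating, it can help to confirm the checkpoint is in the right place and loads cleanly. The sketch below only assumes the path from step 1; the internal structure of the `.pth` file is a guess, so adjust what you print to what you see:

```python
# Sanity-check the downloaded checkpoint. The top-level structure of the
# .pth file is an assumption; inspect the printed keys to learn the layout.
import torch

ckpt_path = "data/trained_model/zjumocap/313/final.pth"
# On recent PyTorch, non-tensor pickles may need weights_only=False.
ckpt = torch.load(ckpt_path, map_location="cpu")

if isinstance(ckpt, dict):
    print("top-level keys:", list(ckpt.keys()))
else:
    print("loaded object of type:", type(ckpt))
```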
**Visualization on ZJU-MoCap**

Take the visualization on `sequence 313` as an example.

1. Download the corresponding pretrained model and put it under `$ROOT/data/trained_model/zjumocap/313`.
2. Visualization:
   * Visualize free-viewpoint videos:
     ```
     python run.py --config configs/zjumocap/dymap_313.py mode visualize vis_novel_view True fast_render True
     ```
     ![free-viewpoint video](images/313-video.rgb.gif)
   * Visualize novel views of a single frame:
     ```
     python run.py --config configs/zjumocap/dymap_313.py mode visualize vis_novel_view True fixed_time True fast_render True
     ```
     ![novel_view](images/313-video_fixed_time.rgb.gif)
   * Visualize the dynamic scene with a fixed camera:
     ```
     python run.py --config configs/zjumocap/dymap_313.py mode visualize vis_novel_view True fixed_view True fast_render True
     ```
     ![time](images/313-video_fixed_view.rgb.gif)
   * Visualize the mesh:
     ```
     python run.py --config configs/zjumocap/dymap_313.py mode visualize vis_mesh True fast_render True
     ```
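The visualization commands above render image sequences (the GIFs shown are pre-rendered examples). If you want to assemble your own rendered frames into a GIF, here is a minimal sketch with `imageio`; the frames directory is hypothetical, so point the glob at wherever your run wrote its images:

```python
# Sketch: assemble rendered frames into a GIF. The glob pattern below is
# hypothetical; replace it with the directory your run actually writes to.
import glob
import imageio.v2 as imageio

frame_paths = sorted(glob.glob("data/result/zjumocap/313/*.png"))
frames = [imageio.imread(p) for p in frame_paths]
# duration is seconds per frame in the imageio v2 API (0.04 s ~ 25 fps)
imageio.mimsave("313_novel_view.gif", frames, duration=0.04)
```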
**Training on ZJU-MoCap**

Take the training on `sequence 313` as an example.

1. Train (a generic sketch of the DDP setup the distributed launcher expects follows after this list):
   ```
   # training
   python train_net.py --config configs/zjumocap/dymap_313.py

   # distributed training
   python -m torch.distributed.launch --nproc_per_node=4 train_net.py --config configs/zjumocap/dymap_313.py
   ```
2. Post-process the trained model:
   ```
   python run.py --config configs/zjumocap/dymap_313.py mode visualize occ_grid True
   ```
3. Tensorboard:
   ```
   tensorboard --logdir data/record/zjumocap
   ```
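`torch.distributed.launch` starts one process per GPU and hands each one its rank via the `LOCAL_RANK` environment variable (or a `--local_rank` argument on older PyTorch). The sketch below is generic PyTorch DDP boilerplate showing what a launched training script typically does; it is not the repo's actual `train_net.py`:

```python
# Generic DDP setup as torch.distributed.launch expects it -- a sketch of
# what a launched training script typically does, not this repo's code.
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")  # launcher sets MASTER_ADDR/PORT

model = torch.nn.Linear(8, 8).cuda()     # stand-in for the real network
model = torch.nn.parallel.DistributedDataParallel(
    model, device_ids=[local_rank]
)
```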

Run the code on NHR

Please see [INSTALL.md](INSTALL.md) to download the dataset.

We provide the pretrained models [here](https://drive.google.com/drive/folders/1ZRgoBijRRK2ML09P7VPJUXtBHgTSWD4D?usp=sharing).

**Test on NHR**

Take the test on `sequence sport1` as an example.

1. Download the corresponding pretrained model and put it at `$ROOT/data/trained_model/nhr/sport1/final.pth`.
2. Test on unseen views:
   ```
   python run.py --config configs/nhr/sport1.py mode evaluate fast_render True
   ```
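Evaluation on unseen views reports image-quality metrics such as PSNR. If you want to compute PSNR against ground truth yourself, here is a small self-contained sketch; the two image paths are hypothetical placeholders:

```python
# Sketch: PSNR between a rendered image and ground truth.
# The file paths are hypothetical placeholders.
import numpy as np
import imageio.v2 as imageio

pred = imageio.imread("pred.png").astype(np.float64) / 255.0
gt = imageio.imread("gt.png").astype(np.float64) / 255.0

mse = max(np.mean((pred - gt) ** 2), 1e-12)  # avoid log(0) on identical images
psnr = 10.0 * np.log10(1.0 / mse)            # images normalized to [0, 1]
print(f"PSNR: {psnr:.2f} dB")
```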
**Visualization on NHR**

Take the visualization on `sequence sport1` as an example.

1. Download the corresponding pretrained model and put it under `$ROOT/data/trained_model/nhr/sport1`.
2. Visualization:
   * Visualize novel views:
     ```
     python run.py --config configs/nhr/sport1.py mode visualize vis_novel_view True fast_render True
     ```
     ![free-viewpoint video](images/nhr-video.rgb.gif)
   * Visualize novel views of a single frame:
     ```
     python run.py --config configs/nhr/sport1.py mode visualize vis_novel_view True fixed_time True fast_render True
     ```
     ![novel_view](images/nhr-video_fixed_time.rgb.gif)
   * Visualize the dynamic scene with a fixed camera:
     ```
     python run.py --config configs/nhr/sport1.py mode visualize vis_novel_view True fixed_view True fast_render True
     ```
     ![time](images/nhr-video_fixed_view.rgb.gif)
   * Visualize the mesh (a sketch for inspecting the extracted mesh follows after this list):
     ```
     python run.py --config configs/nhr/sport1.py mode visualize vis_mesh True fast_render True
     ```
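The `vis_mesh True` mode extracts a surface mesh. To inspect the result, here is a hedged sketch with `trimesh`; the output path is hypothetical, so check the run's log for where the mesh is actually written:

```python
# Sketch: load and inspect an extracted mesh. The .ply path is hypothetical.
import trimesh

mesh = trimesh.load("data/result/nhr/sport1/mesh.ply", force="mesh")
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
mesh.show()  # opens an interactive viewer if a backend (e.g. pyglet) exists
```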
**Training on NHR**

Take the training on `sequence sport1` as an example.

1. Train:
   ```
   # training
   python train_net.py --config configs/nhr/sport1.py

   # distributed training
   python -m torch.distributed.launch --nproc_per_node=4 train_net.py --config configs/nhr/sport1.py
   ```
2. Post-process the trained model (a toy sketch of the occupancy-grid idea follows after this list):
   ```
   python run.py --config configs/nhr/sport1.py mode visualize occ_grid True
   ```
3. Tensorboard:
   ```
   tensorboard --logdir data/record/nhr
   ```
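The `occ_grid True` post-processing step precomputes an occupancy grid so that the renderer can skip empty space. Below is a toy sketch of the idea; `density_fn` is a stand-in for the trained density network and the threshold is arbitrary, so none of this is the repo's actual implementation:

```python
# Toy occupancy grid: evaluate a density field on a coarse 3D grid and
# threshold it so empty cells can be skipped at render time.
import torch

def density_fn(xyz):                             # hypothetical stand-in
    return torch.relu(1.0 - xyz.norm(dim=-1))    # a unit ball of density

res = 64
axis = torch.linspace(-1.0, 1.0, res)
grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
occupancy = density_fn(grid.reshape(-1, 3)) > 0.01   # arbitrary threshold
occupancy = occupancy.reshape(res, res, res)         # (res, res, res) bools
print("occupied cells:", occupancy.sum().item(), "/", res ** 3)
```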

Citation

If you find this code useful for your research, please use the following BibTeX entry.

```
@inproceedings{peng2023representing,
  title={Representing Volumetric Videos as Dynamic MLP Maps},
  author={Peng, Sida and Yan, Yunzhi and Shuai, Qing and Bao, Hujun and Zhou, Xiaowei},
  booktitle={CVPR},
  year={2023}
}
```