NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video
Jiaming Sun*, Yiming Xie*, Linghao Chen, Xiaowei Zhou, Hujun Bao
CVPR 2021 (Oral Presentation and Best Paper Candidate)
```bash
# Ubuntu 18.04 and above is recommended.
sudo apt install libsparsehash-dev  # you can try to install sparsehash with conda if you don't have sudo privileges.
conda env create -f environment.yaml
conda activate neucon
```
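To quickly verify the environment, a minimal sanity check like the following can be run (a sketch; it only assumes the PyTorch and torchsparse packages that environment.yaml installs):

```python
# Sanity check for the conda environment (assumes environment.yaml
# installed PyTorch with CUDA support and torchsparse).
import torch
import torchsparse

print('torch:', torch.__version__)
print('CUDA available:', torch.cuda.is_available())
print('torchsparse:', torchsparse.__version__)
```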
Download the pretrained weights and put them under `PROJECT_PATH/checkpoints/release`.
You can also use gdown to download it from the command line:

```bash
mkdir checkpoints && cd checkpoints
gdown --id 1zKuWqm9weHSm98SZKld1PbEddgLOQkQV
```
We provide a real-time demo of NeuralRecon running with self-captured ARKit data. Please refer to DEMO.md for details.
Download and extract ScanNet by following the instructions provided at http://www.scan-net.org/.
Next, run the data preparation script, which parses the raw data format into the processed pickle format. This script also generates the ground-truth TSDFs using TSDF Fusion.
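For intuition, here is a minimal sketch of the per-voxel update that TSDF Fusion performs when integrating one depth frame into the ground-truth volume (illustrative only, not the repo's actual fusion code; the array names and truncation distance are assumptions):

```python
# Illustrative TSDF Fusion update, not the repo's actual implementation.
# points_z: each voxel's depth in the camera frame (after projection);
# surface_z: the measured depth at the pixel each voxel projects to.
import numpy as np

def integrate(tsdf, weight, points_z, surface_z, trunc=0.12):
    """Fuse one depth observation into a running per-voxel TSDF average."""
    sdf = surface_z - points_z                  # signed distance along the ray
    valid = (surface_z > 0) & (sdf > -trunc)    # skip invalid depth and voxels far behind the surface
    tsdf_obs = np.clip(sdf / trunc, -1.0, 1.0)  # truncate and normalize to [-1, 1]
    w_new = weight + valid                      # per-voxel observation count
    tsdf = np.where(valid, (tsdf * weight + tsdf_obs) / np.maximum(w_new, 1), tsdf)
    return tsdf, w_new
```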
Run inference with:

```bash
python main.py --cfg ./config/test.yaml
```
The reconstructed meshes will be saved to `PROJECT_PATH/results`.
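To take a quick look at a reconstructed mesh, a snippet like this can be used (a sketch; the file name below is a placeholder, since the actual output naming may differ):

```python
# Inspect a reconstructed mesh with trimesh (the path is a placeholder).
import trimesh

mesh = trimesh.load('results/scene_scannet_release_fusion_eval_47/mesh.ply', force='mesh')
print(mesh.vertices.shape, mesh.faces.shape)
mesh.show()  # opens an interactive viewer if a display is available
```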
Evaluate the reconstruction results with:

```bash
python tools/evaluation.py --model ./results/scene_scannet_release_fusion_eval_47 --n_proc 16
```
Note that `evaluation.py` uses pyrender to render depth maps from the predicted mesh for 2D evaluation. If you are using headless rendering, you must also set the environment variable `PYOPENGL_PLATFORM=osmesa` (see pyrender for more details).
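For reference, headless depth rendering with pyrender looks roughly like this (a sketch, not the code in evaluation.py; the mesh path, intrinsics, and camera pose are illustrative placeholders):

```python
import os
os.environ['PYOPENGL_PLATFORM'] = 'osmesa'  # must be set before importing pyrender

import numpy as np
import trimesh
import pyrender

# Placeholder path; point this at a predicted mesh.
tm = trimesh.load('path/to/mesh.ply', force='mesh')
scene = pyrender.Scene()
scene.add(pyrender.Mesh.from_trimesh(tm))

# Placeholder intrinsics and pose; use the dataset's camera parameters.
camera = pyrender.IntrinsicsCamera(fx=577.87, fy=577.87, cx=319.5, cy=239.5)
scene.add(camera, pose=np.eye(4))

renderer = pyrender.OffscreenRenderer(640, 480)
color, depth = renderer.render(scene)  # depth is 0 where no surface is hit
renderer.delete()
```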
You can print the results of a previous evaluation run using:

```bash
python tools/visualize_metrics.py --model ./results/scene_scannet_release_fusion_eval_47
```
Start training by running `./train.sh`. More information about training (e.g. GPU requirements, convergence time, etc.) will be added soon.
The training is separated into two phases, and the switching between phases is controlled manually for now (see the sketch after this list):

- Phase 1 (epochs 0-20): training on single fragments, with
  `MODEL.FUSION.FUSION_ON=False, MODEL.FUSION.FULL=False`.
- Phase 2 (epochs 21-50): training with GRUFusion, with
  `MODEL.FUSION.FUSION_ON=True, MODEL.FUSION.FULL=True`.
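As referenced above, here is a minimal sketch of how these two phases could be toggled with yacs-style config overrides (illustrative; the repo's actual train script and config layout may differ):

```python
# Illustrative phase toggle with yacs (not the repo's actual train script).
from yacs.config import CfgNode as CN

cfg = CN()
cfg.MODEL = CN()
cfg.MODEL.FUSION = CN()
cfg.MODEL.FUSION.FUSION_ON = False  # Phase 1 default: single fragments
cfg.MODEL.FUSION.FULL = False

# Phase 2: enable GRUFusion by merging command-line style overrides.
cfg.merge_from_list(['MODEL.FUSION.FUSION_ON', True, 'MODEL.FUSION.FULL', True])
print(cfg.MODEL.FUSION)
```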
If you find this code useful for your research, please use the following BibTeX entry.
```bibtex
@article{sun2021neucon,
  title={{NeuralRecon}: Real-Time Coherent {3D} Reconstruction from Monocular Video},
  author={Sun, Jiaming and Xie, Yiming and Chen, Linghao and Zhou, Xiaowei and Bao, Hujun},
  journal={CVPR},
  year={2021}
}
```
We would like to specially thank Reviewer 3 for the insightful and constructive comments. We would like to thank Sida Peng, Siyu Zhang, and Qi Fang for proofreading. Some of the code in this repo is borrowed from MVSNet_pytorch, thanks Xiaoyang!
This work is affiliated with ZJU-SenseTime Joint Lab of 3D Vision, and its intellectual property belongs to SenseTime Group Ltd.
Copyright SenseTime. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.