This is the official repository for the ECCV 2020 paper *Pyramid Multi-view Stereo Net with Self-adaptive View Aggregation* (PVA-MVSNet).
Install the dependencies with the provided script: `./conda_install.sh`.
Download the preprocessed DTU training data (borrowed from [MVSNet](https://raw.githubusercontent.com/YoYo000/MVSNet)) and unzip it into your `MVS_TRAINING` folder. Set `dtu_data_root` to your `MVS_TRAINING` path in `env.sh`.
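For reference, the relevant setting in `env.sh` might look like the following minimal sketch (the path is a placeholder for your own `MVS_TRAINING` location):

```bash
# env.sh (sketch): point dtu_data_root at the unpacked DTU training data
dtu_data_root="/path/to/MVS_TRAINING"
```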
Create a log folder and a model folder wherever you want to save the training outputs, set `log_dir` and `save_dir` in `train.sh` accordingly, and then run `./train.sh` to start training.
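A minimal sketch of these steps, assuming `log_dir` and `save_dir` are set as shell variables inside `train.sh` (all paths are placeholders):

```bash
# Create folders for training logs and saved checkpoints (placeholder paths)
mkdir -p /path/to/logs /path/to/checkpoints

# In train.sh, point the output variables at those folders, e.g.:
#   log_dir="/path/to/logs"
#   save_dir="/path/to/checkpoints"

# Launch training
./train.sh
```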
Prepare a `TEST_DATA_FOLDER`, which should contain one `cams` folder, one `images` folder, and one `pair.txt` file (the expected layout is sketched below). Place the pretrained model in `MODEL_FOLDER`.
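The expected layout of `TEST_DATA_FOLDER` is therefore (the comments are descriptive only; file naming inside the folders follows the usual MVSNet-style input convention):

```
TEST_DATA_FOLDER/
├── cams/       # camera parameter files
├── images/     # input images
└── pair.txt    # view pairing file
```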
In `eval_pyramid.sh`, set `MODEL_FOLDER` to `ckpt` and `model_ckpt_index` to `checkpoint_list`, then run `./eval_pyramid.sh`.
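A minimal sketch of the evaluation step (the exact variable layout inside `eval_pyramid.sh` may differ; check the script):

```bash
# In eval_pyramid.sh: fill in the checkpoint settings (ckpt / checkpoint_list)
# with your MODEL_FOLDER and model_ckpt_index, then run the script:
./eval_pyramid.sh
```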
Depth fusion uses the `depthfusion_pytorch.py` script (from MVSNet-pytorch). Set `use_mmp` to `True` in `tools/postprocess.sh` to use Multi-metric Pyramid Depth Aggregation. Change into the `./tools` directory and run `./postprocess.sh` to generate the final point cloud, as sketched below.
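A minimal sketch of the post-processing step (assuming `use_mmp` is edited directly inside `tools/postprocess.sh`; check the script for the exact syntax):

```bash
# Enable Multi-metric Pyramid Depth Aggregation by setting use_mmp=True
# inside tools/postprocess.sh, then run the fusion from the tools directory:
cd ./tools
./postprocess.sh
```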
Quantitative results on the DTU evaluation set (mean distances in mm, lower is better):

| Method | Acc. | Comp. | Overall |
|---|---|---|---|
| MVSNet (D=256) | 0.396 | 0.527 | 0.462 |
| PVA-MVSNet (D=192) | 0.379 | 0.336 | 0.357 |
PVA-MVSNet point cloud results with full post-processing are also provided: DTU evaluation point clouds (extraction code: `zau7`).
Results on the Tanks and Temples intermediate benchmark (F-score, higher is better):

| Mean | Family | Francis | Horse | Lighthouse | M60 | Panther | Playground | Train |
|---|---|---|---|---|---|---|---|---|
| 54.46 | 69.36 | 46.80 | 46.01 | 55.74 | 57.23 | 54.75 | 56.70 | 49.06 |
Please refer to the leaderboard.
If you find this project useful for your research, please cite:
```
@inproceedings{yi2020PVAMVSNET,
  title={Pyramid multi-view stereo net with self-adaptive view aggregation},
  author={Yi, Hongwei and Wei, Zizhuang and Ding, Mingyu and Zhang, Runze and Chen, Yisong and Wang, Guoping and Tai, Yu-Wing},
  booktitle={ECCV},
  year={2020}
}
```
Thanks to Xiaoyang Guo for his PyTorch re-implementation of MVSNet (MVSNet-pytorch), and to Yao Yao for his previous works MVSNet and R-MVSNet.