Closed · shaonian123daly closed this 3 months ago

How can I use the pre-trained models to obtain point clouds for datasets such as TNT and LLFF? And where should these datasets be placed?
Hello, thanks for your interest in our work.
For the LLFF dataset, you can refer to scripts/mvsgs/llff_ft.sh:
- L2: `data_dir` is the location of the LLFF dataset.
- L3: `dir_ply` is the save directory for the point clouds.
- L7: `python run.py --type evaluate --cfg_file configs/mvsgs/llff_eval.yaml save_ply True dir_ply $dir_ply` generates point clouds for all scenes in the LLFF dataset.
- L9-14: per-scene optimization, using the point clouds above as initialization.
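For orientation, here is a minimal sketch of the lines in scripts/mvsgs/llff_ft.sh referenced above, assembled only from the description in this reply; the paths are placeholders and the per-scene optimization commands are elided, so treat the actual script in the repository as authoritative.

```bash
# Illustrative sketch of scripts/mvsgs/llff_ft.sh (placeholder paths; see the real script)

data_dir="/path/to/LLFF"          # L2: location of the LLFF dataset
dir_ply="/path/to/point_clouds"   # L3: save directory for the generated point clouds

# L7: generate point clouds for all LLFF scenes with the pre-trained model
python run.py --type evaluate --cfg_file configs/mvsgs/llff_eval.yaml \
    save_ply True dir_ply "$dir_ply"

# L9-14: per-scene optimization, initialized from the point clouds above
# (commands omitted here; they consume $data_dir and $dir_ply, see the script)
```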
If you just want to obtain the point clouds, you can simply run:

python run.py --type evaluate --cfg_file configs/mvsgs/llff_eval.yaml save_ply True dir_ply mvsgs_pointcloud test_dataset.data_root <path to LLFF>

Here `dir_ply` is the save directory for the point clouds and `test_dataset.data_root` is the location of the LLFF dataset. You can also specify the dataset directory in the config file instead.
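A concrete invocation might look like the following; the dataset path is a placeholder for wherever you unpacked LLFF (the location itself does not matter, as long as `test_dataset.data_root` points to it).

```bash
# Placeholder paths: /data/nerf_llff_data is wherever you extracted the LLFF scenes,
# and mvsgs_pointcloud is an arbitrary output directory.
python run.py --type evaluate --cfg_file configs/mvsgs/llff_eval.yaml \
    save_ply True dir_ply mvsgs_pointcloud \
    test_dataset.data_root /data/nerf_llff_data

# The generated point clouds should end up under ./mvsgs_pointcloud (the dir_ply argument)
ls mvsgs_pointcloud
```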
The procedure is similar for TNT (Tanks and Temples) or any other dataset.
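For example, a hedged sketch for Tanks and Temples, assuming the repository provides an analogous evaluation config; the filename configs/mvsgs/tnt_eval.yaml and the dataset path are assumptions, so check the configs/mvsgs/ directory for the exact name.

```bash
# Assumed TNT analogue of the LLFF command above.
# configs/mvsgs/tnt_eval.yaml and /data/tnt are assumptions/placeholders; verify both.
python run.py --type evaluate --cfg_file configs/mvsgs/tnt_eval.yaml \
    save_ply True dir_ply mvsgs_pointcloud_tnt \
    test_dataset.data_root /data/tnt
```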