Closed · Holmes-Alan closed this 2 years ago
I also want to know.
Hi, thank you and sorry for the delayed response!
The dataset format is the same as in https://github.com/Kai-46/nerfplusplus#data
If you have an image dataset, you would need to do the following:
1. Create a folder in data/, e.g., data/newdataset, and create source and out subfolders, e.g., data/newdataset/source, data/newdataset/out.
2. Copy your images to data/newdataset/source (a minimal Python sketch of steps 1-2 follows this list).
3. Run colmap_runner/run_colmap.py data/newdataset in the root folder. This will put the images into data/newdataset/rgb, and calibrate the camera parameters to data/newdataset/kai_cameras_normalized.json.
4. (Optional but recommended) Generate segmentation masks using the data/newdataset/rgb/* images as the source, to filter out, e.g., people, bicycles, cars, or any other dynamic objects. The method will work regardless, but this would significantly reduce visible artifacts in case these objects are present. We used this repository to generate the masks. The grayscale masks should be placed in the data/newdataset/mask/ subfolder. You can use the provided datasets as a reference (a small mask sanity-check sketch follows this list).
5. Split the data into train, val, test splits. To do so, first create the corresponding subfolders: data/newdataset/{train,val,test}/rgb. Then split the images as you like by copying them from data/newdataset/rgb to the corresponding split's rgb folder, e.g., data/newdataset/train/rgb/ (see the split sketch after this list).
6. Run cvt.py. It will automatically copy all camera parameters and masks to the split folders. At the moment this script is not in the repository, but you can find it in the provided datasets, e.g., here.
7. Create a config configs/newdataset.txt. Then you would need to change datadir to data, scene to newdataset, and expname in the config (a minimal config illustration follows this list).
8. Train with python ddp_train_nerf.py --config configs/newdataset.txt.
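For reference, here is a minimal Python sketch of steps 1-2; the source photo directory and the image extensions are placeholders you would adjust for your own data:

```python
# Steps 1-2 (sketch): create the dataset folders and copy the raw photos
# into data/newdataset/source. SRC_PHOTOS and the extensions are placeholders.
import os
import shutil
from glob import glob

root = "data/newdataset"
SRC_PHOTOS = "/path/to/your/photos"  # placeholder: wherever your raw images are

os.makedirs(os.path.join(root, "source"), exist_ok=True)
os.makedirs(os.path.join(root, "out"), exist_ok=True)

for ext in ("*.jpg", "*.JPG", "*.png"):
    for img in sorted(glob(os.path.join(SRC_PHOTOS, ext))):
        shutil.copy(img, os.path.join(root, "source"))
```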
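For step 4, the exact mask filenames and gray-value convention should be taken from the provided datasets; this small sketch (assuming Pillow and identically named masks) only checks that every image in rgb/ has a grayscale mask of the same resolution in mask/:

```python
# Step 4 (sketch): check that each rgb image has a same-named, same-sized,
# grayscale mask in mask/. The naming convention here is an assumption;
# compare with the provided datasets if it does not match your files.
import os
from PIL import Image

root = "data/newdataset"
for name in sorted(os.listdir(os.path.join(root, "rgb"))):
    mask_path = os.path.join(root, "mask", name)
    if not os.path.exists(mask_path):
        print("missing mask for", name)
        continue
    rgb = Image.open(os.path.join(root, "rgb", name))
    mask = Image.open(mask_path)
    if mask.size != rgb.size:
        print("size mismatch for", name, mask.size, "vs", rgb.size)
    if mask.mode != "L":
        print("mask is not 8-bit grayscale for", name, "mode:", mask.mode)
```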
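And a simple way to do the step 5 split; the 8:1:1 pattern below is only an example, split however you like:

```python
# Step 5 (sketch): create the split folders and copy every image from rgb/
# into train/, val/ or test/. The split pattern is only an example.
import os
import shutil

root = "data/newdataset"
images = sorted(os.listdir(os.path.join(root, "rgb")))

for split in ("train", "val", "test"):
    os.makedirs(os.path.join(root, split, "rgb"), exist_ok=True)

for i, name in enumerate(images):
    split = "val" if i % 10 == 8 else "test" if i % 10 == 9 else "train"
    shutil.copy(os.path.join(root, "rgb", name),
                os.path.join(root, split, "rgb", name))
```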
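For step 7, the easiest way is to copy one of the provided configs and only edit the keys mentioned above. As a rough illustration (not a complete config, keep the remaining keys from the provided one):

```
expname = newdataset
datadir = data
scene = newdataset
```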
Hi, I just uploaded cvt.py to the repo as colmap_runner/cvt.py. Also, I updated README.md with the instructions above for using your own data. If you have any problems, please write here.
Thanks for your reply!
Excellent work! How can I use your code to train on my own dataset?