r00tman / NeRF-OSR

NeRF for Outdoor Scene Relighting [ECCV 2022]
https://4dqv.mpi-inf.mpg.de/NeRF-OSR/

how to train on my own dataset? #4

Closed · Holmes-Alan closed this issue 2 years ago

Holmes-Alan commented 2 years ago

Excellent work! How can I use your code to train on my own dataset?

wangmingyang4 commented 2 years ago

I also want to know.

r00tman commented 2 years ago

Hi, thank you and sorry for the delayed response!

The dataset format is the same as in https://github.com/Kai-46/nerfplusplus#data
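Concretely, the steps below produce roughly this on-disk layout (a sketch assembled from the steps in this thread; `newdataset` is just an example name, and `cvt.py` in step 8 additionally copies camera parameters and masks into the split folders):

```
data/newdataset/
├── source/                        # your raw input images (step 3)
├── out/                           # COLMAP working directory (step 2)
├── rgb/                           # undistorted images from run_colmap.py (step 5)
├── mask/                          # optional grayscale dynamic-object masks (step 6)
├── kai_cameras_normalized.json    # calibrated camera parameters (step 5)
├── train/rgb/                     # your chosen training images (step 7)
├── val/rgb/
└── test/rgb/
```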

If you have an image dataset, you would need to do the following:

  1. Set the path to your colmap binary in colmap_runner/run_colmap.py:13.
  2. Create a dataset directory in data/, e.g., data/newdataset and create source and out subfolders, e.g., data/newdataset/source, data/newdataset/out.
  3. Copy all the images to data/newdataset/source.
  4. Run colmap_runner/run_colmap.py data/newdataset from the repository root.
  5. This sets up the data: it undistorts the images into data/newdataset/rgb and writes the calibrated camera parameters to data/newdataset/kai_cameras_normalized.json.
  6. Optionally, you can now generate masks from the data/newdataset/rgb/* images to filter out dynamic objects such as people, bicycles, or cars. The method works without masks, but they significantly reduce visible artifacts when such objects are present. We used this repository to generate the masks. The grayscale masks should be placed in the data/newdataset/mask/ subfolder. You can use the provided datasets as a reference.
  7. Now that we have all data and calibrations, we need to create train, val, test splits. To do so, first create corresponding subfolders: data/newdataset/{train,val,test}/rgb. Then split the images as you like by copying them from data/newdataset/rgb to the corresponding split's rgb folder, e.g., data/newdataset/train/rgb/.
  8. Now generate the camera parameters for the splits by running cvt.py. It automatically copies all camera parameters and masks into the split folders. At the moment this script is not in the repository, but you can find it in the provided datasets, e.g., here.
  9. The dataset folder is ready. Now you need to create the dataset config. You can copy the config from a provided dataset, e.g., here, to configs/newdataset.txt. Then change datadir to data, scene to newdataset, and set expname to a name of your choice.
  10. Now you can launch training with python ddp_train_nerf.py --config configs/newdataset.txt (see the command-line sketch after this list).
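
To tie the steps together, here is a minimal shell sketch of steps 2–10 (the dataset name, image paths, and split assignment are placeholder examples; the exact invocation of cvt.py is not spelled out above, so check the script itself):

```bash
# 2-3. create the dataset layout and copy your images into source/
mkdir -p data/newdataset/source data/newdataset/out
cp /path/to/my/photos/*.jpg data/newdataset/source/

# 4-5. run COLMAP from the repository root; this writes data/newdataset/rgb
#      and data/newdataset/kai_cameras_normalized.json
python colmap_runner/run_colmap.py data/newdataset

# 7. create the split folders and distribute the undistorted images
#    (the wildcard patterns are placeholders -- choose your own split)
mkdir -p data/newdataset/{train,val,test}/rgb
cp data/newdataset/rgb/IMG_00*.jpg data/newdataset/train/rgb/
cp data/newdataset/rgb/IMG_01*.jpg data/newdataset/val/rgb/
cp data/newdataset/rgb/IMG_02*.jpg data/newdataset/test/rgb/

# 8. copy camera parameters (and masks, if generated) into the split folders;
#    the arguments/working directory here are an assumption -- see the script
python colmap_runner/cvt.py

# 10. launch training with the config created in step 9
python ddp_train_nerf.py --config configs/newdataset.txt
```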
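
For reference, the config edit in step 9 boils down to something like this (only the three options mentioned above are shown; copy everything else unchanged from a provided config):

```
# configs/newdataset.txt -- only the options mentioned in step 9;
# all other settings should be copied from a provided dataset config
expname = newdataset
datadir = data
scene = newdataset
```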
r00tman commented 2 years ago

Hi, I just uploaded cvt.py to the repo as colmap_runner/cvt.py. I also updated README.md with the above instructions for using your own data. If you have any problems, please write here.

wangmingyang4 commented 2 years ago

Thanks for your reply!