caligraf1 opened this issue 3 months ago
In the paper, all synthetic NeRF scenes got trained on `transforms_train.json` and then evaluated on `transforms_test.json`, same as previous work.
There's no code in this repo to generate the paper figures, but you can run `./instant-ngp path-to-scene/transforms_train.json` if you want to replicate the same training setup. PSNR numbers might be slightly different from the paper because the codebase has evolved since then; you can check out the initial commit of this repo if you want a more faithful reproduction.

Also note that the PSNR numbers displayed in the GUI differ slightly from the values reported in the paper. This is because prior NeRF work has certain objectionable treatment of color spaces (linear vs. sRGB) and their combination with (non-)premultiplied alpha that INGP does not mirror. For the paper, we wrote a separate code path that exactly replicated the PSNR computation setup of prior work. Using `--nerf_compatibility` with `./scripts/run.py` enables part of that code path.
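For example, a paper-style run would train on the train split and then evaluate PSNR on the held-out split, something like `python scripts/run.py --scene path-to-scene/transforms_train.json --test_transforms path-to-scene/transforms_test.json --nerf_compatibility`. (The `--scene` and `--test_transforms` flag names here are my reading of the script, not confirmed above; check `python scripts/run.py --help`, since the codebase has evolved.)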
(Note that if you run `./instant-ngp path-to-scene`, it'll grab all the `.json` files from the folder, which yields a better reconstruction but is not how the paper results were generated.)
What if I train on my own dataset and have just one `transforms.json` file? How is the split done then?
Then it's up to you to come up with a split, generate corresponding `.json` files, and load only the one that you'd like to operate on.
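For illustration, here is a minimal sketch of such a split. The 1-in-8 test ratio, the hard-coded paths, and the output file names are my own choices, not something instant-ngp prescribes:

```python
import json

# Load the single transforms.json that covers all images.
with open("path-to-scene/transforms.json") as f:
    transforms = json.load(f)

# Hold out every 8th frame for testing; the rest go to training.
# The 1-in-8 ratio is an arbitrary example, not a requirement.
frames = transforms["frames"]
train = {**transforms, "frames": [fr for i, fr in enumerate(frames) if i % 8 != 0]}
test = {**transforms, "frames": [fr for i, fr in enumerate(frames) if i % 8 == 0]}

# Write the two splits next to the original file.
with open("path-to-scene/transforms_train.json", "w") as f:
    json.dump(train, f, indent=2)
with open("path-to-scene/transforms_test.json", "w") as f:
    json.dump(test, f, indent=2)
```

You can then train with `./instant-ngp path-to-scene/transforms_train.json` and keep `transforms_test.json` for evaluation only.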
Yes, but how is the training done in this case? Are all images used just for training (which doesn't make sense)? I've trained the network on my dataset, providing one `.json` file with poses for all images, and calculated the accuracy metrics. How are they being calculated in such a case?
Hello,
How is the train-test split done in Instant NGP? And where is it in the code?
Thank you.