MultiPath opened this issue 2 years ago
Yes, we have separate implementations for training and for inference. Some optimizations are only possible during inference: for instance, training needs to store intermediate values (neuron activations, etc.) for backpropagation, which inference can skip. Both implementations produce almost exactly the same results; the PSNR difference should be in the 4th decimal place. I think an incorrect configuration is causing these problems.
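The memory difference described above can be sketched with a toy example. This is pure Python for illustration, not the repo's actual code; the class and method names (`TinyLayer`, `forward_train`, `forward_infer`) are invented:

```python
# Sketch of why training must keep intermediate activations for
# backpropagation, while inference can discard them immediately.

def relu(x):
    return [max(0.0, v) for v in x]

class TinyLayer:
    def __init__(self, w):
        self.w = w  # one scalar weight, shared across inputs, for simplicity

    def forward_train(self, x):
        # Training path: cache the pre-activation so the backward pass
        # can evaluate the ReLU derivative later. This cache is the
        # extra memory that the inference implementation avoids.
        self.cached_pre = [self.w * v for v in x]
        return relu(self.cached_pre)

    def forward_infer(self, x):
        # Inference path: compute and return; nothing is cached, so
        # memory stays flat no matter how deep the network is.
        return relu([self.w * v for v in x])

    def backward(self, grad_out):
        # Uses the cached pre-activations: dReLU/dx = 1 where pre > 0.
        return [g if p > 0 else 0.0
                for g, p in zip(grad_out, self.cached_pre)]

layer = TinyLayer(w=2.0)
x = [-1.0, 0.5]

y_train = layer.forward_train(x)   # caches activations
y_infer = layer.forward_infer(x)   # caches nothing
assert y_train == y_infer          # identical outputs either way

grad_in = layer.backward([1.0, 1.0])
print(y_infer, grad_in)            # → [0.0, 1.0] [0.0, 1.0]
```

Both paths return the same values; only the training path pays the memory cost of `cached_pre`, which is why a dedicated inference implementation can be faster without changing the results.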
Can you create a file cfgs/render/slow_all.yaml with this content

```yaml
performance_monitoring: False
render_only: True
testskip: 1
```

and then run

```shell
python run_nerf.py cfgs/paper/finetune/Synthetic_NeRF_Lego.yaml -rcfg cfgs/render/slow_all.yaml
```
With this configuration the slower training implementation is used for rendering the test set. Let me know if there are still any problems.
I found that using the `render()` function is much slower and produces much worse results than enabling "fast sampling". What is the difference? Why is training based on `render()`?
Thanks
![image](https://user-images.githubusercontent.com/5780274/140596219-bf0df7ec-7118-4872-8e96-8a0971901710.png)