You should enable the background model, or the background pixels will collapse onto the foreground. Please see configs/nerf_unbounded/nerf_unbounded_default.py for more details.
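Concretely, that usually means pointing the scene at the unbounded default rather than the bounded 'nerf' one. The snippet below is only an illustrative sketch: the file name configs/nerf_unbounded/can.py, the experiment name, the paths, and the dataset_type are assumptions, so cross-check them against nerf_unbounded_default.py and the neighbouring per-scene configs before copying.

```python
# Hypothetical per-scene config, e.g. configs/nerf_unbounded/can.py (assumed name).
# It inherits the background-model settings from the unbounded default instead of
# the foreground-only 'nerf' default.
_base_ = './nerf_unbounded_default.py'

expname = 'dvgo_can_unbounded'   # assumed experiment name
basedir = './logs/can'           # assumed output directory

data = dict(
    datadir='./data/can',        # assumed path to the 120 posed YCB images
    dataset_type='llff',         # use whichever loader matches your pose format
)
```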
Hi,
I have trained an inward-facing scene of a can using 120 posed images from the YCB dataset, spaced evenly over an upper hemisphere, with the 'nerf' config.
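For reference, the scene config follows the usual bounded pattern; this is a placeholder sketch rather than my exact file (the name configs/nerf/can.py, the paths, and the dataset settings are assumed):

```python
# Placeholder per-scene config, e.g. configs/nerf/can.py, inheriting the bounded,
# foreground-only defaults.
_base_ = '../default.py'

expname = 'dvgo_can'             # placeholder experiment name
basedir = './logs/can'           # placeholder output directory

data = dict(
    datadir='./data/can',        # placeholder path to the 120 posed YCB images
    dataset_type='blender',      # loader assumed for the synthetic-style inward-facing poses
)
```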
Training images

Rendering the training poses works well, but the test poses have lots of distortion:

Test renderings

Viewing the coarse model, it looks like it is dense in a semi-spherical area around the can (essentially where all the cameras are pointing):

Coarse trained model
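To quantify that, this is the kind of quick check I'd run on the coarse checkpoint. It is only a sketch: the checkpoint path, the 'model_state_dict'/'density.grid' keys, and the grid layout are assumptions about what the saved file contains, not something confirmed from the code.

```python
import torch
import torch.nn.functional as F

# Assumed checkpoint path and state-dict keys -- adjust to your run; the key
# 'density.grid' and a (1, 1, Nx, Ny, Nz) layout are guesses, not taken from the repo.
ckpt = torch.load('./logs/can/coarse_last.tar', map_location='cpu')
density = ckpt['model_state_dict']['density.grid'].squeeze()

# Turn raw density into per-voxel alpha (softplus activation, arbitrary step size)
# and count voxels above a small threshold to see how much of the volume the model
# considers occupied -- a hemispherical shell of floaters shows up as occupancy
# concentrated on one side of the grid.
alpha = 1.0 - torch.exp(-F.softplus(density) * 0.5)
occupied = alpha > 0.01
print(f'occupied voxels: {occupied.sum().item()}/{occupied.numel()} '
      f'({100.0 * occupied.float().mean().item():.2f}%)')

# Rough occupancy profile along one axis (which axis is "up" is also a guess).
profile = occupied.float().mean(dim=(0, 1))
print('occupancy per slice:', [f'{v:.3f}' for v in profile.tolist()])
```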
Is this normal? Is it likely to be the cause of the distortion? How can it be fixed?
Thanks!