Hi,

I have a bunch of images taken from our scanner. I was really excited to try training a NeRF on one of our datasets. For reference, here is what one of our training images looks like:
(image removed)
All of our images have this uniform, gray-ish background behind the subject. I'm not sure if this is relevant, but I did read one related issue where a user talked about the effects of uniform / textured backgrounds. Also, we are using our own camera poses (since we calibrate each of the cameras in our scanning rig).
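One thing worth mentioning in case others hit this: as far as I understand, the repo's loaders expect OpenGL-style camera-to-world poses (x right, y up, z back), so calibration output in another convention needs converting first. Here's a minimal sketch of the conversion I apply, assuming our calibration emits OpenCV-convention 4x4 matrices (`opencv_to_nerf` is just an illustrative helper, not part of the repo):

```python
import numpy as np

# Minimal sketch (not from the repo): convert an OpenCV-convention
# camera-to-world pose (x right, y down, z forward) to the OpenGL-style
# convention NeRF's loaders expect (x right, y up, z back).
# `c2w` is a hypothetical 4x4 camera-to-world matrix from our calibration.
def opencv_to_nerf(c2w: np.ndarray) -> np.ndarray:
    out = c2w.copy()
    out[:3, 1] *= -1.0  # flip the camera's y axis (down -> up)
    out[:3, 2] *= -1.0  # flip the camera's z axis (forward -> back)
    return out          # translation (column 3) is left unchanged
```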
Because the cameras are facing inwards, I've followed the advice in the README and trained with the following flags:
--no_ndc
--spherify
--lindisp
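Concretely, my training invocation looks roughly like this (the config path is a placeholder for our own dataset):

```
python run_nerf.py --config configs/our_scan.txt --no_ndc --spherify --lindisp
```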
However, the results (even after 200k iterations) still exhibit significant visual artifacts, even though the loss decreases steadily throughout training. The subject itself seems poorly converged, and there are also large "cloud"-like artifacts that appear to drift in front of the virtual camera (see the images below for reference). My initial assumption was that these issues stemmed from an improper setting of our near / far planes, but adjusting those values (as mentioned below) doesn't seem to help.
(image removed)
Some other things I have tried include:
Training with --precrop_iters set to 1k-5k
Adjusting our near / far planes (see the sketch after this list)
Increasing N_rand, N_samples, and N_importance to match the values used in the paper configs
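In case it's relevant, this is roughly how I derive the near / far bounds from our calibrated poses. It's a minimal sketch: `estimate_near_far` and `subject_radius` are illustrative names, and it assumes the subject sits near the origin of the rig:

```python
import numpy as np

# Minimal sketch: derive near / far bounds from calibrated camera-to-world
# poses. `poses` is an [N, 4, 4] array; `subject_radius` is a rough bound
# on the subject's size. Assumes the subject is centered at the rig origin.
def estimate_near_far(poses: np.ndarray, subject_radius: float):
    cam_centers = poses[:, :3, 3]                   # camera positions in world space
    dists = np.linalg.norm(cam_centers, axis=-1)    # distance from each camera to the origin
    near = max(dists.min() - subject_radius, 1e-3)  # keep near strictly positive
    far = dists.max() + subject_radius
    return near, far
```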
However, none of these changes seems to improve the visual quality of the final renders. Does anyone have ideas, suggestions, or other things we could try?
Thanks so much for making this repo available! It's been really fun to play with.