When training Latent-NeRF, the SDS loss becomes extremely large, often in the thousands and sometimes over a million, and I wonder whether this is normal. With a loss that large, the regularization terms seem negligible by comparison. When such a big loss goes into the optimizer, will the network really converge? Did I do something wrong? Any help would be really appreciated.
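For context on why the number I see might not matter: in common SDS implementations (stable-dreamfusion-style code, which I believe Latent-NeRF follows), the printed "loss" is only a surrogate whose `backward()` injects a precomputed gradient; its scalar value can be arbitrarily large without affecting the update. Below is a minimal sketch of that pattern using random stand-in tensors (not the real UNet or renderer):

```python
import torch

torch.manual_seed(0)

# Stand-ins: latents rendered by the NeRF, the added noise, and the UNet's
# noise prediction (random here; in Latent-NeRF these come from the model).
latents = torch.randn(1, 4, 64, 64, requires_grad=True)
noise = torch.randn_like(latents)
noise_pred = torch.randn_like(latents)
w = 1.0  # timestep-dependent weighting

# SDS gradient: w * (eps_pred - eps), detached from the diffusion model.
grad = w * (noise_pred - noise)

# Surrogate loss: its printed value is meaningless, but backward()
# delivers exactly `grad` to the latents.
loss = (grad.detach() * latents).sum()
loss.backward()

assert torch.allclose(latents.grad, grad)  # optimizer only ever sees `grad`
```

If the code follows this pattern, the thousands-to-millions loss value is just the surrogate scalar, and what matters for convergence is the magnitude of `grad` itself relative to the regularizers' gradients.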