dunbar12138 / DSNeRF

Code release for DS-NeRF (Depth-supervised Neural Radiance Fields)
https://www.cs.cmu.edu/~dsnerf/
MIT License

Zero weights appear while computing SigmaLoss #69

Closed woominsong closed 1 year ago

woominsong commented 1 year ago

Hi, thanks for your nice work!

I found an issue while trying to use your KL-divergence loss. In loss.py, almost half of the alpha values become zero because of negative raw density values:

```python
alpha = raw2alpha(raw[..., 3] + noise, dists)  # [N_rays, N_samples]
```
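For reference, I am assuming raw2alpha follows the usual nerf-pytorch style definition (an assumption on my part, the repo may define it slightly differently), where ReLU clamps negative raw densities to zero and therefore yields exactly zero alpha:

```python
import torch
import torch.nn.functional as F

# Assumed nerf-pytorch style helper: a negative raw density is clamped to 0 by
# ReLU, so alpha = 1 - exp(0) = 0 for that sample.
raw2alpha = lambda raw, dists, act_fn=F.relu: 1. - torch.exp(-act_fn(raw) * dists)

raw = torch.tensor([-1.0, 0.5])
dists = torch.ones(2)
print(raw2alpha(raw, dists))  # tensor([0.0000, 0.3935]) -- the negative density gives alpha == 0
```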

This produces zero weights for such points, and those zeros end up being passed into the log function:

```python
weights = alpha * torch.cumprod(torch.cat([torch.ones((alpha.shape[0], 1)).to(device), 1.-alpha + 1e-10], -1), -1)[:, :-1]
loss = -torch.log(weights) * torch.exp(-(z_vals - depths[:, None]) ** 2 / (2 * err)) * dists
```

How did you deal with this problem in your experiments? (e.g., by masking out those points when computing the loss, or by using a softplus activation in raw2alpha instead of relu?)
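To make those two options concrete, here is a rough sketch of what I mean (hypothetical helper names, not code from this repo):

```python
import torch
import torch.nn.functional as F

# (a) Mask out zero-weight samples before taking the log (hypothetical helper).
def masked_sigma_loss(weights, z_vals, depths, err, dists):
    mask = weights > 0
    log_w = torch.zeros_like(weights)
    log_w[mask] = torch.log(weights[mask])
    loss = -log_w * torch.exp(-(z_vals - depths[:, None]) ** 2 / (2 * err)) * dists
    return loss.sum(-1).mean()

# (b) Use softplus instead of ReLU so the activated density stays strictly positive
#     and alpha never becomes exactly zero.
raw2alpha_softplus = lambda raw, dists: 1. - torch.exp(-F.softplus(raw) * dists)
```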

Thanks in advance!

dunbar12138 commented 1 year ago

Hi, thanks for your interest!

We added a small eps (~1e-5) to weights to prevent NaN.
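In the loss snippet above, that fix looks roughly like this (a sketch; the exact value and placement of the eps in the released loss.py may differ):

```python
# Sketch: add a small eps to the weights before the log so that zero-weight
# samples give log(eps) instead of -inf / NaN.
eps = 1e-5
loss = -torch.log(weights + eps) * torch.exp(-(z_vals - depths[:, None]) ** 2 / (2 * err)) * dists
```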

woominsong commented 1 year ago

Thanks a lot for your help!