Hi, thanks for your nice work!
I found an issue while trying to use your KL-divergence loss. In `loss.py`, almost half of the alpha values become zero because the raw density values are negative:
```python
alpha = raw2alpha(raw[..., 3] + noise, dists)  # [N_rays, N_samples]
```
This produces zero weights for those points and ends up passing a bunch of zeros into the log:
```python
weights = alpha * torch.cumprod(torch.cat([torch.ones((alpha.shape[0], 1)).to(device), 1. - alpha + 1e-10], -1), -1)[:, :-1]
loss = -torch.log(weights) * torch.exp(-(z_vals - depths[:, None]) ** 2 / (2 * err)) * dists
```
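To illustrate the failure mode, here is a minimal sketch. I'm assuming `raw2alpha` follows the usual NeRF formulation `1 - exp(-relu(raw) * dists)`; the numbers are made up:

```python
import torch
import torch.nn.functional as F

# Assumed raw2alpha: 1 - exp(-relu(raw) * dists)
raw2alpha = lambda raw, dists: 1.0 - torch.exp(-F.relu(raw) * dists)

raw = torch.tensor([[-0.5, 1.2, -2.0, 0.3]])    # negative raw densities get clipped to 0 by relu
dists = torch.ones_like(raw)

alpha = raw2alpha(raw, dists)                    # alpha == 0 wherever raw < 0
weights = alpha * torch.cumprod(
    torch.cat([torch.ones((alpha.shape[0], 1)), 1.0 - alpha + 1e-10], -1), -1)[:, :-1]

print(torch.log(weights))                        # -inf at those samples, so the loss becomes inf/NaN
```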
How did you deal with this problem in your experiments? (e.g. perhaps by masking out those points while computing the loss, or by using a softplus activation in `raw2alpha` instead of relu? I've sketched both ideas below.)

Thanks in advance!
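To make the question concrete, here is roughly what I have in mind for the two workarounds. This is just a sketch on my side, not taken from your code; the function names, the epsilon, and the reduction over valid samples are my own choices:

```python
import torch
import torch.nn.functional as F

# (a) Mask out zero-weight samples (and clamp before the log) when computing the loss.
def masked_kl_loss(weights, z_vals, depths, err, dists, eps=1e-10):
    valid = weights > 0                                   # drop samples whose weight collapsed to 0
    log_w = torch.log(weights.clamp_min(eps))             # clamping alone already avoids -inf
    kernel = torch.exp(-(z_vals - depths[:, None]) ** 2 / (2 * err))
    loss = -(log_w * kernel * dists) * valid
    return loss.sum() / valid.sum().clamp_min(1)

# (b) Use softplus instead of relu inside raw2alpha, so alpha never becomes exactly zero.
def raw2alpha_softplus(raw, dists):
    return 1.0 - torch.exp(-F.softplus(raw) * dists)
```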