dunbar12138 / DSNeRF

Code release for DS-NeRF (Depth-supervised Neural Radiance Fields)
https://www.cs.cmu.edu/~dsnerf/
MIT License

Question about SigmaLoss vs. the DS-NeRF paper #74

Open jhq1234 opened 1 year ago

jhq1234 commented 1 year ago

Hi, thank you for sharing your great code. I'm glad to be learning about DS-NeRF through your kind and detailed explanations.

I have a question about `SigmaLoss` in relation to the paper.

I take the sigma loss to be the ray-distribution loss, which the paper writes in terms of $h(t)$, defined as $h(t) = T(t)\,\sigma(t)$. But in `class SigmaLoss` in `loss.py`, I see:

        # sigma = F.relu(raw[...,3] + noise)
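        # raw2alpha (defined elsewhere in the repo) converts density to opacity,
        # roughly: alpha = 1 - exp(-relu(raw + noise) * dists)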
        alpha = raw2alpha(raw[...,3] + noise, dists)  # [N_rays, N_samples]
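        # Below: weights_t = alpha_t * prod_{s<t}(1 - alpha_s) = T_t * alpha_t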
        weights = alpha * torch.cumprod(torch.cat([torch.ones((alpha.shape[0], 1)).to(device), 1.-alpha + 1e-10], -1), -1)[:, :-1]

        loss = -torch.log(weights) * torch.exp(-(z_vals - depths[:,None]) ** 2 / (2 * err)) * dists
        loss = torch.sum(loss, dim=1)

        return loss

In the line

    loss = -torch.log(weights) * torch.exp(-(z_vals - depths[:,None]) ** 2 / (2 * err)) * dists

the loss is defined in terms of `weights`. From the paper's definition of $h(t)$, I would have expected something like

    loss = torch.log(h) * torch.exp(-(z_vals - depths[:,None]) ** 2 / (2 * err)) * dists

where

    h = sigma * torch.cumprod(torch.cat([torch.ones((alpha.shape[0], 1)).to(device), 1.-alpha + 1e-10], -1), -1)[:, :-1]

but the actual code uses `weights` instead. Why does the code differ from the DS-NeRF paper here? And why is there a '-' in front of `torch.log(weights)`?

Thanks

dunbar12138 commented 1 year ago

Hi, thanks for your interest!

In the continuous case, $h(t) = T(t)\,\sigma(t)$, since $\mathrm{Color} = \int h(t)\, c(t)\, dt$.

In the discrete case, $h_t = T_t\, \alpha_t$, since $\mathrm{Color} = \sum_t h_t\, c_t$.

More details can be found in the original NeRF paper.
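To make the correspondence concrete, here is a minimal standalone sketch (toy tensors and shapes of my choosing, not the repo's code) showing that the `weights` computed in `loss.py` are exactly the discrete $h_t = T_t \alpha_t$, and that for small sample spacings they approximate the continuous $T(t)\,\sigma(t)\,dt$:

    import torch

    # Toy inputs: 4 rays, 8 samples per ray.
    sigma = torch.rand(4, 8)          # densities, [N_rays, N_samples]
    dists = torch.full((4, 8), 0.01)  # sample spacings delta_t

    # Discrete opacity per sample: alpha_t = 1 - exp(-sigma_t * delta_t)
    alpha = 1.0 - torch.exp(-sigma * dists)

    # Transmittance before each sample: T_t = prod_{s<t} (1 - alpha_s)
    T = torch.cumprod(
        torch.cat([torch.ones(alpha.shape[0], 1), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1,
    )[:, :-1]

    # The quantity `weights` holds in loss.py: h_t = T_t * alpha_t
    weights = alpha * T

    # For small delta_t, alpha_t ~ sigma_t * delta_t, so h_t ~ T_t * sigma_t * delta_t,
    # recovering the continuous h(t) = T(t) * sigma(t) up to the dt factor.
    print(torch.allclose(weights, T * sigma * dists, atol=1e-3))  # True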

jhq1234 commented 1 year ago

@dunbar12138 Thanks! Now I understand why $h_t = T_t \alpha_t$. Additionally, can I ask one more thing?

In the paper, the depth loss is defined as

$$\sum_k \log h_k \exp\left(-\frac{(t_k-\mathbf{D}_{ij})^2}{2 \hat{\sigma}_i^2}\right) \Delta t_k,$$

but in the code a '-' is added in front of `torch.log(weights)`, as if the loss were

$$\sum_k -\log h_k \exp\left(-\frac{(t_k-\mathbf{D}_{ij})^2}{2 \hat{\sigma}_i^2}\right) \Delta t_k.$$

I can't understand this. Why did this happen?

YZsZY commented 1 year ago

Hi, I would like to ask a question. According to the KL divergence formula, the loss should be the following equation, but there is a term missing in the paper:

[image: equation screenshot]

Can you tell me why this is so? Thanks a lot!

dunbar12138 commented 1 year ago
[screenshot: slide, 2022-12-19]

Hope this slide helps!
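In short (paraphrasing the slide): the depth loss is the KL divergence between a narrow Gaussian around the sensed depth and the ray distribution $h$,

$$\mathrm{KL}\big(\mathcal{N}(\mathbf{D}_{ij}, \hat{\sigma}_i) \,\|\, h\big) = \int \mathcal{N}(t) \log \mathcal{N}(t)\, dt - \int \mathcal{N}(t) \log h(t)\, dt.$$

The first term is the negative entropy of the Gaussian; it does not depend on the network parameters, so it is dropped from the optimization objective. Only the cross-entropy term is kept, which discretizes (up to the Gaussian normalization factor) to the $\sum_k \log h_k \exp(\cdot)\, \Delta t_k$ sum discussed above.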

KaziiBotashev commented 1 year ago

> @dunbar12138 Thanks! Now I understand why $h_t = T_t \alpha_t$. Additionally, can I ask one more thing?
>
> In the paper, the depth loss is defined as $\sum_k \log h_k \exp\left(-\frac{(t_k-\mathbf{D}_{ij})^2}{2 \hat{\sigma}_i^2}\right) \Delta t_k$, but in the code a '-' is added in front of `torch.log(weights)`: $\sum_k -\log h_k \exp\left(-\frac{(t_k-\mathbf{D}_{ij})^2}{2 \hat{\sigma}_i^2}\right) \Delta t_k$. I can't understand this. Why did this happen?

@dunbar12138 Can you answer this question too?

dunbar12138 commented 1 year ago

Hi, sorry for the confusion. It is actually a typo in the paper: there should be a '-' in front of the log-likelihood term, since we want to maximize the likelihood (i.e., minimize its negative).
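For clarity, the corrected objective, matching the code, is

$$\mathcal{L} = \sum_k -\log h_k \,\exp\!\left(-\frac{(t_k-\mathbf{D}_{ij})^2}{2\hat{\sigma}_i^2}\right)\Delta t_k,$$

which is minimized during training; minimizing this negative log-likelihood is equivalent to maximizing the likelihood of the sensed depth under $h$.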