Open · jhq1234 opened this issue 1 year ago
Hi, thanks for your interest!
In the continuous case, $$h(t) = T(t)\,\sigma(t)$$, so $$Color = \int h(t)\, c(t)\, dt$$.
In the discrete case, $$h_t = T_t\, \alpha_t$$, so $$Color = \sum_t h_t\, c_t$$.
More details could be found in the original NeRF paper.
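For reference, the discrete weights $h_t = T_t \alpha_t$ can be computed the same way the NeRF codebase does. A minimal sketch (function name and shapes are illustrative, not from this repo):

```python
import torch

def render_weights(sigma, dists):
    """Discrete volume-rendering weights h_t = T_t * alpha_t.

    sigma: (N_rays, N_samples) predicted densities.
    dists: (N_rays, N_samples) sample spacings (Delta t).
    """
    # alpha_t = 1 - exp(-sigma_t * Delta t): opacity of each segment
    alpha = 1.0 - torch.exp(-sigma * dists)
    # T_t = prod_{s<t} (1 - alpha_s): transmittance up to sample t
    T = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1
    )[:, :-1]
    return T * alpha  # h_t = T_t * alpha_t

sigma = torch.rand(2, 8)
dists = torch.full((2, 8), 0.1)
h = render_weights(sigma, dists)
print(h.shape)  # torch.Size([2, 8])
```

Since $\sum_t h_t = 1 - \prod_t (1 - \alpha_t)$, the weights along a ray always sum to at most 1.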
@dunbar12138 Thanks! I can understand why $h_t = T_t \alpha_t$. Additionally, can I ask one more thing?
From the paper, the depth loss is defined as $\sum_k \log h_k \exp \left(-\frac{\left(t_k-\mathbf{D}_{i j}\right)^2}{2 \hat{\sigma}_i^2}\right) \Delta t_k$,
but in the code, a '-' is added in front of `torch.log(weights)`, giving $\sum_k -\log h_k \exp \left(-\frac{\left(t_k-\mathbf{D}_{i j}\right)^2}{2 \hat{\sigma}_i^2}\right) \Delta t_k$.
I can't understand this part of the code. Why is that?
Hi, I would like to ask a question. According to the KL divergence formula, it should be the following equation, but a term is missing in the paper.
Can you tell me why this is so? Thanks a lot!
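For what it's worth, here is a sketch of where the term likely goes, assuming (as in DS-NeRF) the depth prior $\mathbb{D}$ is a narrow Gaussian centered on the sensed depth:

$$KL(\mathbb{D} \,\|\, h) = \int \mathbb{D}(t) \log \mathbb{D}(t)\, dt - \int \mathbb{D}(t) \log h(t)\, dt.$$

The first term is the negative entropy of $\mathbb{D}$, which does not depend on the network parameters, so it is constant during optimization and can be dropped. Only the cross-entropy term $-\int \mathbb{D}(t) \log h(t)\, dt$ needs to appear in the loss, which may account for the seemingly missing item.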
Hope this slide helps!
@dunbar12138 Can you answer this question too?
Hi, sorry for the confusion. It is actually a typo in the paper. There should be a '-' in front of the log-likelihood since we want to maximize it.
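In other words, the code minimizes the negative log-likelihood rather than maximizing the log-likelihood. A minimal sketch of that form of the loss (the helper name and shapes are illustrative, not taken from the repo):

```python
import torch

def depth_loss(weights, z_vals, depth, err, dists):
    """Negative log-likelihood form of the depth loss (sketch).

    weights: (N_rays, N_samples) h_k = T_k * alpha_k
    z_vals:  (N_rays, N_samples) sample depths t_k
    depth:   (N_rays,) sensed depth D_ij
    err:     depth variance sigma_hat^2
    dists:   (N_rays, N_samples) sample spacings Delta t_k
    """
    # Gaussian weighting around the sensed depth
    gauss = torch.exp(-(z_vals - depth[:, None]) ** 2 / (2 * err))
    # The leading '-' turns the log-likelihood to be maximized
    # into a loss to be minimized by gradient descent.
    return torch.sum(-torch.log(weights + 1e-10) * gauss * dists, dim=-1).mean()
```

Since $h_k \le 1$, each $-\log h_k$ is non-negative, so minimizing this loss pushes mass toward the sensed depth.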
Hi, thank you for sharing your great code. I'm glad to learn about DS-NeRF from your kind and detailed explanation.
I have a question about SigmaLoss relative to the paper.
I guess the sigma loss is the ray distribution loss, written in the paper's notation via $h(t)$, where $h(t) = T(t)\,\sigma(t)$ is defined. But in class SigmaLoss in loss.py, I see

```
loss = -torch.log(weights) * torch.exp(-(z_vals - depths[:,None]) ** 2 / (2 * err)) * dists
```

Here, the loss is defined in terms of `weights` (i.e., $T_i\,\alpha_i$). From the paper's definition of $h(t)$, I would expect

```
loss = torch.log(h) * torch.exp(-(z_vals - depths[:,None]) ** 2 / (2 * err)) * dists
```

where

```
h = sigma * torch.cumprod(torch.cat([torch.ones((alpha.shape[0], 1)).to(device), 1.-alpha + 1e-10], -1), -1)[:, :-1]
```

But the actual code is not like this. Can I ask why the code differs from the DS-NeRF paper? Additionally, can I ask why the '-' is added in front of `torch.log(weights)`? Thanks!
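One possible resolution, sketched below under the standard NeRF discretization (the variable names are illustrative): for small $\sigma \Delta t$, $\alpha = 1 - e^{-\sigma \Delta t} \approx \sigma \Delta t$, so the discrete `weights` $T\,\alpha$ used in the code already approximate the continuous $h(t)\,dt = T(t)\,\sigma(t)\,dt$.

```python
import torch

sigma = torch.rand(4, 16) * 0.5          # densities
dists = torch.full((4, 16), 0.01)        # small Delta t
# alpha = 1 - exp(-sigma * dists) ~ sigma * dists for small sigma * dists
alpha = 1.0 - torch.exp(-sigma * dists)
T = torch.cumprod(
    torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1), -1
)[:, :-1]
weights = T * alpha            # discrete h_k, as used in the repo's code
h_cont = T * sigma * dists     # continuous h(t) dt, discretized directly
print(torch.allclose(weights, h_cont, atol=1e-3))  # True
```

So using `weights` in place of an explicit `h` is a first-order-equivalent discretization, not a different loss.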