As defined in SigmaLoss, `sigmaloss = -torch.log(weights) * torch.exp(-(z_vals - depths[:, None]) ** 2 / (2 * 1)) * dists`, where `weights = alpha * torch.cumprod(torch.cat([torch.ones((alpha.shape[0], 1)).to(alpha.device), 1. - alpha + 1e-10], -1), -1)[:, :-1]` and `raw2alpha = lambda raw, dists, act_fn=F.relu: 1. - torch.exp(-act_fn(raw) * dists)`. Since `raw` can sometimes be negative, `act_fn(raw)` will be 0, so `alpha` will be 0 and `weights` will also be 0; then `log(weights)` will be `-inf` and the sigma loss will be `inf`. Then in the backward pass there should be some errors (NaN gradients). Have you encountered such a situation?
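For reference, here is a minimal sketch that reproduces what I mean (the toy `raw`/`dists` tensors and the `1e-6` epsilon are my own illustrative choices, not from the repo):

```python
import torch
import torch.nn.functional as F

raw2alpha = lambda raw, dists, act_fn=F.relu: 1. - torch.exp(-act_fn(raw) * dists)

# Toy batch: one ray, four samples; negative raw densities make alpha == 0 there.
raw = torch.tensor([[-1.0, -0.5, 2.0, -3.0]], requires_grad=True)
dists = torch.full((1, 4), 0.1)

alpha = raw2alpha(raw, dists)
weights = alpha * torch.cumprod(
    torch.cat([torch.ones((alpha.shape[0], 1)), 1. - alpha + 1e-10], -1), -1)[:, :-1]

print(torch.log(weights))          # -inf wherever alpha == 0

# The Gaussian depth factor is always finite, so the inf survives the product.
loss = (-torch.log(weights) * dists).sum()
loss.backward()
print(raw.grad)                    # contains nan

# One possible workaround (hypothetical, not from the repo): epsilon inside the log.
print(torch.log(weights + 1e-6))   # finite everywhere
```

An epsilon inside the log, mirroring the `1e-10` already added inside the `cumprod`, is one common way to keep the loss finite; clamping `weights` before the log would work similarly.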