Open YessionCC opened 1 year ago
I think your code is roughly correct. I also tried to implement Ref-NeRF recently; here is how I extracted the normals:

```python
x.requires_grad_(True)
sigmas, h = self.density(x, return_feat=True)
n_grad = -torch.autograd.grad(sigmas.sum(), x, retain_graph=True)[0]
n_grad = l2_normalize(n_grad)
# finally, volume render n_grad to get the rendered normal
```
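The "volume render n_grad" step can be sketched as follows. This is a minimal sketch under standard alpha-compositing assumptions (per-sample `sigmas` and inter-sample distances `deltas`); the function name and shapes are placeholders, not code from this repo:

```python
import torch

def render_normals(sigmas, deltas, normals):
    """Composite per-sample normals along each ray with standard
    volume-rendering weights.

    sigmas:  (N_rays, N_samples)    densities
    deltas:  (N_rays, N_samples)    distances between consecutive samples
    normals: (N_rays, N_samples, 3) per-sample unit normals
    """
    alphas = 1.0 - torch.exp(-sigmas * deltas)
    # transmittance T_i = prod_{j < i} (1 - alpha_j)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alphas[:, :1]), 1.0 - alphas + 1e-10],
                  dim=-1),
        dim=-1)[:, :-1]
    weights = alphas * trans                             # (N_rays, N_samples)
    return (weights.unsqueeze(-1) * normals).sum(dim=1)  # (N_rays, 3)
```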
I'm using volume rendering to get the normal, as in Ref-NeRF, but I think your surface-normal approach is also reasonable. At first I got NaN values from pretrained models too, and I realized it's because we're using `exp` as the density activation: float16 maxes out at 65504, so many sigmas are too large to compute a correct gradient (I think). So I changed the activation to `torch.nn.functional.softplus` and trained again; this time the normals are "visually correct" (planar surfaces have roughly the same normal vector), but the drawback of changing the activation is that the PSNR decreased slightly.
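The float16 overflow can be seen directly with a quick NumPy check (independent of any NeRF code):

```python
import numpy as np

# float16's largest finite value is 65504, so exp(x) overflows to inf
# once x exceeds log(65504) ~ 11.09. Densities from an exp activation
# under mixed precision can therefore become inf, and downstream
# arithmetic mixing those infs (e.g. inf - inf) yields NaN.
print(np.finfo(np.float16).max)     # 65504.0
print(np.exp(np.float16(11.0)))     # ~59870, still finite
print(np.exp(np.float16(12.0)))     # inf

# softplus(x) = log(1 + exp(x)) grows only linearly for large x,
# so it stays finite where exp has already overflowed:
print(np.logaddexp(0.0, 12.0))      # ~12.000006
```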
I haven't been able to solve the noisy-normal issue, though, nor did I succeed in implementing Ref-NeRF: I implemented all the components but the result was still not correct, so I gave up. I'd like to hear from you if you're trying to do the same thing.
Thank you for your reply. I am trying to reproduce NeRFactor (https://arxiv.org/abs/2106.01970) in the instant-ngp framework. I compared the original NeRF (Mildenhall et al., 2020) with your instant-ngp implementation on the task of computing surface normals. The original NeRF gives better results (although the noise is still too large for the normals to be usable); in instant-ngp I can hardly identify objects in the normal map. So I finally gave up.
It seems the normal can't be calculated as dσ/dx in instant-ngp. Using an SDF-based NeRF may be a better solution. (This is just my guess.)
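For what it's worth, with an SDF the normal is well-defined as the normalized gradient of the distance field. A minimal autograd sketch, where `sdf_fn` is a placeholder for any differentiable SDF network (e.g. a NeuS/VolSDF-style MLP), not an API from this repo:

```python
import torch

def sdf_normals(sdf_fn, x):
    """Normals from a signed-distance field: n = grad f / ||grad f||.

    sdf_fn: differentiable function mapping (N, 3) points to (N,)
            signed distances (placeholder for an SDF network).
    x:      (N, 3) query points.
    """
    x = x.requires_grad_(True)
    f = sdf_fn(x)
    grad = torch.autograd.grad(f.sum(), x)[0]
    return torch.nn.functional.normalize(grad, dim=-1)
```

For a unit sphere SDF `f(x) = ||x|| - 1`, this returns the radial direction at each query point, as expected.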
I have tried to use `torch.autograd.grad()` to calculate dσ/dx. The code is roughly as follows:

```python
def get_surface_normals(depth, rays_o, rays_d, model):
    surface_points = rays_o + rays_d * depth
    surface_points.requires_grad_(True)
    sigmas = model.density(surface_points)
    # grad() returns a tuple of gradients; take the first element
    normals = torch.autograd.grad(sigmas, surface_points,
                                  torch.ones_like(sigmas))[0]
    normals = -normalize(normals)
    return normals
```

but the result seems too noisy and has many NaN values. Is my code wrong, or is it something else?
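One possible source of the NaNs (an assumption, not a confirmed diagnosis) is L2-normalizing gradients that are exactly zero, e.g. at surface points that land in empty space where σ is flat. An epsilon-guarded normalize avoids the division by zero:

```python
import torch

def safe_l2_normalize(v, eps=1e-8):
    """L2-normalize along the last dim without producing NaN for
    zero-length vectors (plain v / ||v|| gives 0/0 = NaN there)."""
    return v / torch.clamp(v.norm(dim=-1, keepdim=True), min=eps)
```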