Haian-Jin / TensoIR

[CVPR 2023] TensoIR: Tensorial Inverse Rendering
https://haian-jin.github.io/TensoIR/
MIT License

[Code] Enable gradient in renderer.py #12

Closed NK-CS-ZZL closed 1 year ago

NK-CS-ZZL commented 1 year ago

Amazing work, congrats! I have a question about the code and would appreciate a reply when convenient. In renderer.py, line 37, the call to the TensoIR model is wrapped in torch.enable_grad(), so I wonder why the gradient is needed here?

Haian-Jin commented 1 year ago

Hi,

This is because the wrapped code computes derived normals, as done here. Computing derived normals calls torch.autograd.grad, which requires gradient tracking to be enabled. Since I wrap the evaluation code in torch.no_grad() (for example, here), an error would arise during evaluation if the code you mentioned were not wrapped in torch.enable_grad(), because gradient tracking is disabled by default there.
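To illustrate the point, here is a minimal sketch of the pattern: a function that derives normals from the gradient of a density field via torch.autograd.grad, called from an evaluation loop wrapped in torch.no_grad(). The `density` function is a toy stand-in for TensoIR's tensorial field, not the repo's actual code.

```python
import torch

def density(x):
    # toy density field standing in for TensoIR's tensorial radiance field
    return (x ** 2).sum(dim=-1)

def compute_derived_normals(xyz):
    # Derived normals are the (negated, normalized) gradients of the density
    # w.r.t. the sample positions; torch.autograd.grad needs a graph, so we
    # re-enable gradient tracking even if the caller disabled it.
    with torch.enable_grad():
        xyz = xyz.detach().requires_grad_(True)
        sigma = density(xyz)
        grad, = torch.autograd.grad(sigma.sum(), xyz)
    return -torch.nn.functional.normalize(grad, dim=-1)

# evaluation is wrapped in no_grad, as in the repo
with torch.no_grad():
    pts = torch.tensor([[1.0, 2.0, 2.0], [3.0, 0.0, 4.0]])
    normals = compute_derived_normals(pts)
print(normals.shape)  # torch.Size([2, 3])
```

Without the inner torch.enable_grad(), the autograd.grad call would fail under the outer no_grad() because no graph is recorded.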

Hope this answers your question.

NK-CS-ZZL commented 1 year ago

Since compute_derived_normals is decorated with @torch.enable_grad(), gradients are available inside that function regardless of whether torch.enable_grad() or torch.no_grad() is used outside it. Moreover, using torch.no_grad() prevents the computation graph from accumulating outside the function, which is more VRAM-friendly. In my environment, both settings produce the same results, but torch.no_grad() consumes less memory.
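A small sketch of why the decorator makes the outer enable_grad redundant: @torch.enable_grad() re-enables autograd inside the decorated function even when the caller is inside torch.no_grad(). The function name here is hypothetical, chosen to mirror the pattern in compute_derived_normals.

```python
import torch

@torch.enable_grad()
def grad_of_square(x):
    # the decorator re-enables autograd locally, so this works even when
    # the caller sits inside a torch.no_grad() block
    x = x.detach().requires_grad_(True)
    y = (x ** 2).sum()
    g, = torch.autograd.grad(y, x)
    return g

x = torch.tensor([1.0, 2.0, 3.0])

with torch.no_grad():
    g = grad_of_square(x)  # succeeds: decorator overrides the outer no_grad
print(g)  # tensor([2., 4., 6.])
```

Because the decorator guarantees gradients inside the function, wrapping the call site in torch.enable_grad() adds nothing, while torch.no_grad() at the call site keeps the surrounding evaluation from building a graph it never uses.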

Haian-Jin commented 1 year ago

You are right. The code you mentioned is unnecessary. I will delete it later. Thanks for your suggestion!

NK-CS-ZZL commented 1 year ago

You're welcome and thanks for the reply.