Hi,
This is because the wrapped code computes derived normals, as done here. Computing the derived normals calls torch.autograd.grad, which requires gradient tracking to be enabled. Since I wrap the evaluation code with torch.no_grad (for example here), an error would arise during evaluation if I didn't wrap the code you mentioned with torch.enable_grad(), because gradients are disabled by default when evaluating.
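For illustration, here is a minimal sketch of the situation (not the actual TensoIR code; `toy_field` and `derived_normals` are made-up stand-ins): the normals come from torch.autograd.grad of a scalar field, so grad mode has to be switched back on locally even though the surrounding evaluation loop runs under torch.no_grad.

```python
import torch

def derived_normals(field, pts):
    # Toy stand-in for the derived-normal computation: the normal is the
    # (negated, normalized) gradient of a scalar field w.r.t. the query points.
    with torch.enable_grad():                       # re-enable grad even if the caller is in no_grad
        pts = pts.detach().requires_grad_(True)     # leaf tensor that tracks gradients
        sigma = field(pts).sum()                    # scalar output of the hypothetical field
        (grad,) = torch.autograd.grad(sigma, pts)   # would raise an error if grad mode were still off
    return torch.nn.functional.normalize(-grad, dim=-1)

toy_field = lambda x: (x ** 2).sum(dim=-1)          # hypothetical density field, purely for illustration

with torch.no_grad():                               # evaluation disables gradients globally
    normals = derived_normals(toy_field, torch.rand(4, 3))
print(normals.shape)                                # torch.Size([4, 3])
```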
Hope this solves your problem.
Since the function compute_derived_normals is already decorated with @torch.enable_grad(), we do have gradients inside this function whether we use torch.enable_grad or torch.no_grad outside it.
Besides, using torch.no_grad prevents the autograd graph from being built for the code outside the function, which is more VRAM-friendly.
In my environment, the two settings give the same results, but using torch.no_grad leads to lower memory consumption.
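A quick check of the decorator semantics described above (a toy example, not taken from the repo): @torch.enable_grad() turns grad mode back on inside the decorated function regardless of the caller's context, while an outer torch.no_grad keeps the surrounding code from building a graph.

```python
import torch

@torch.enable_grad()              # grad mode is forced on inside the function body
def inner():
    x = torch.ones(2, requires_grad=True)
    y = (x * 3.0).sum()
    return torch.is_grad_enabled(), y.requires_grad

with torch.no_grad():             # outer no_grad: no graph is built for surrounding ops
    print(inner())                # (True, True) -- the decorator overrides the outer context

with torch.enable_grad():         # same result with enable_grad outside
    print(inner())                # (True, True)
```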
You are right. The code you mentioned is unnecessary. I will delete it later. Thanks for your suggestion!
You're welcome and thanks for the reply.
Amazing work, congratulations! I have a question about the code and hope for a reply when it's convenient. In the file renderer.py, line 37, the TensoIR model is wrapped by torch.enable_grad(), so I wonder why we need the gradient here?