zhengqili / CGIntrinsics

This is the CGIntrinsics implementation described in the paper "CGIntrinsics: Better Intrinsic Image Decomposition through Physically-Based Rendering", Z. Li and N. Snavely, ECCV 2018.
MIT License

I think self.optimizer_G.zero_grad() needs to be located before self.forward_both() #2

Closed youngminpark2559 closed 5 years ago

youngminpark2559 commented 5 years ago

Hi. While reading your awesome project code, I came across a part that puzzled me. I think the gradient-clearing call needs to come before forward(), but right now it is placed after forward(). In other words, it looks like the computed gradients are being cleared on every iteration.

I think this part https://github.com/lixx2938/CGIntrinsics/blob/master/models/intrinsic_model.py#L99 needs to be changed to:

```python
self.optimizer_G.zero_grad()
self.forward_both()
```

Please let me know whether my proposed fix is correct.

jundanl commented 5 years ago

I think it's fine as is. In PyTorch, gradients are accumulated by loss.backward(), so we need to call optimizer.zero_grad() before .backward() if we only want the gradients of the current minibatch. However, .forward() and .zero_grad() have no effect on each other, so the order between .zero_grad() and .forward() doesn't matter.

I found the same question on the PyTorch forum; it may help you: https://discuss.pytorch.org/t/should-we-perform-optimizer-zero-grad-before-we-do-forward-pass-or-after-that/14254
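To illustrate the point, here is a minimal sketch (using a hypothetical toy model and random data, not the actual CGIntrinsics model) showing that calling zero_grad() before or after the forward pass yields identical gradients, as long as it happens before backward():

```python
import torch

# Hypothetical toy setup, for illustration only.
torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(8, 4)
y = torch.randn(8, 1)

def step(zero_before_forward):
    if zero_before_forward:
        optimizer.zero_grad()                 # zero, then forward
        loss = ((model(x) - y) ** 2).mean()
    else:
        loss = ((model(x) - y) ** 2).mean()   # forward, then zero
        optimizer.zero_grad()
    loss.backward()                           # gradients accumulate only here
    return model.weight.grad.clone()

# No optimizer.step() is taken, so the model is unchanged between runs
# and both orderings produce the same gradient.
g1 = step(True)
g2 = step(False)
assert torch.allclose(g1, g2)
```

The key observation is that the forward pass only builds the computation graph; gradient buffers are written by backward(), so zeroing them anywhere before backward() has the same effect.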

youngminpark2559 commented 5 years ago

Thanks for the information. I'll think about it some more.