languandong opened 2 years ago
Is there any difference?
@languandong You can use both; it doesn't matter as long as `optimizer.zero_grad()` is called before `loss.backward()`.
Note that `optimizer.zero_grad()` zeroes out the gradients in the `grad` field of the parameter tensors, and `loss.backward()` computes the gradients, which are then stored in that same `grad` field.
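A minimal sketch (not from the thread; the tensor `w` and the optimizer are illustrative) that makes the `grad` field behavior concrete:

```python
import torch

w = torch.ones(3, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w * 2).sum()
loss.backward()    # computes d(loss)/dw and stores it in w.grad
print(w.grad)      # tensor([2., 2., 2.])

opt.zero_grad()    # clears w.grad (resets it to None or zeros it,
                   # depending on the set_to_none flag / PyTorch version)
print(w.grad)      # None with recent PyTorch defaults
```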
@languandong I think the confusion originates from the misconception that the gradient is computed and stored during the forward pass. In fact, the forward pass only constructs the DAG (the autograd graph). The gradient is computed lazily: nothing is computed until `loss.backward()` is explicitly invoked.
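A small sketch (again illustrative, not from the thread) showing that laziness:

```python
import torch

x = torch.randn(4, requires_grad=True)
y = (x ** 2).sum()   # forward pass: only records the autograd graph

print(x.grad)        # None -- no gradient has been computed yet

y.backward()         # backward pass: gradients are computed now
print(x.grad)        # equals 2 * x
```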
I think the correct way to code the training loop is to call `optimizer.zero_grad()` before `loss.backward()`, not between `loss.backward()` and `optimizer.step()` (see the sketch below).
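The original snippets are not reproduced in this excerpt, so this is a reconstruction of the point; `model`, `criterion`, `loader`, and `optimizer` stand in for whatever the poster used:

```python
# Recommended: clear stale gradients before computing new ones.
for x, y in loader:
    optimizer.zero_grad()            # reset .grad from the previous iteration
    loss = criterion(model(x), y)    # forward pass builds the graph
    loss.backward()                  # backward pass fills .grad
    optimizer.step()                 # update parameters from .grad

# Broken: zeroing between backward() and step() wipes the fresh
# gradients, so step() sees nothing useful to apply.
for x, y in loader:
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.zero_grad()            # <-- destroys what backward() just computed
    optimizer.step()
```

Placing `optimizer.zero_grad()` at the end of the loop body, after `optimizer.step()`, is also fine, since it still runs before the next iteration's `backward()`.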