yunjey / pytorch-tutorial

PyTorch Tutorial for Deep Learning Researchers

some question about the position of 'optimizer.zero_grad()' #238

Open languandong opened 2 years ago

languandong commented 2 years ago

I think the correct way to code the training loop is this:

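    # Clear gradients left over from the previous iteration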
    optimizer.zero_grad()
    # Forward pass
    outputs = model(images)
    loss = criterion(outputs, labels)

    # Backward and optimize
    loss.backward()
    optimizer.step()

not this:

    # Forward pass
    outputs = model(images)
    loss = criterion(outputs, labels)

    # Backward and optimize
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Vandaci commented 2 years ago

any difference?

silky1708 commented 1 year ago

@languandong
You can use both; it doesn't matter, as long as optimizer.zero_grad() is called before loss.backward(). Note that optimizer.zero_grad() zeroes out the gradients in the .grad field of the tensors, and loss.backward() computes the gradients, which are then stored in the .grad field.
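
A quick way to convince yourself (a minimal sketch with a made-up toy model and random data, not the tutorial's actual code): both orderings yield exactly the same gradients, because zero_grad() only clears the .grad fields and the forward pass never touches them.

    import torch
    import torch.nn as nn

    # Toy model and dummy data (hypothetical names, just for illustration)
    torch.manual_seed(0)
    model = nn.Linear(4, 2)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    images = torch.randn(8, 4)
    labels = torch.randint(0, 2, (8,))

    def grads_zero_before_forward():
        optimizer.zero_grad()                      # zero first, then forward
        loss = criterion(model(images), labels)
        loss.backward()
        return [p.grad.clone() for p in model.parameters()]

    def grads_zero_after_forward():
        loss = criterion(model(images), labels)
        optimizer.zero_grad()                      # forward first, then zero
        loss.backward()
        return [p.grad.clone() for p in model.parameters()]

    same = all(torch.equal(a, b)
               for a, b in zip(grads_zero_before_forward(), grads_zero_after_forward()))
    print(same)  # True -- the position relative to the forward pass is irrelevant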

githraj commented 8 months ago

As pointed out by @languandong, the critical factor is the correct sequence in which optimizer.zero_grad() and loss.backward() are called. Both code snippets are valid as long as optimizer.zero_grad() is invoked before loss.backward(). This ensures that the gradients are properly zeroed out and then computed and stored in the appropriate tensors' grad field.

luyuwuli commented 7 months ago

@languandong I think the confusion originates from the misconception that gradients are computed and stored during the forward pass. In fact, the forward pass only constructs the computation graph (a DAG). Gradients are computed lazily: nothing is computed until loss.backward() is explicitly invoked.
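
To make this concrete, here is a minimal sketch (with a hypothetical one-parameter model, not tied to the tutorial): after the forward pass, .grad is still empty, and it is only populated when loss.backward() runs.

    import torch

    # Hypothetical one-parameter "model": y = w * x
    w = torch.tensor(2.0, requires_grad=True)
    x = torch.tensor(3.0)

    # Forward pass: only the computation graph is built, no gradient is computed
    loss = (w * x - 1.0) ** 2
    print(w.grad)    # None -- nothing has been stored in .grad yet

    # Backward pass: the gradient is computed now and stored in w.grad
    loss.backward()
    print(w.grad)    # tensor(30.) -- d/dw (w*x - 1)^2 = 2 * (w*x - 1) * x = 2 * 5 * 3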