Mikoto10032 / AutomaticWeightedLoss

Multi-task learning using uncertainty to weigh losses for scene geometry and semantics, Auxiliary Tasks in Multi-task Learning

Why avoid the loss becoming negative? #17

Open SITUSITU opened 1 year ago

SITUSITU commented 1 year ago

Thanks for your work; I have a question.

Why can't the loss be negative? It seems to me that the value of the loss does not affect the training of the network.

As an example, let's say my loss is the cross-entropy loss, which lies in (0, 1) most of the time, and the optimization goal is to minimize the loss.

Now suppose I add a constant of -100 to the loss: loss = loss - 100. The loss will then lie in (-100, -99), and the optimization goal remains the same: reduce the loss.

The way to reduce the loss is gradient descent. Clearly, the constant -100 does not affect the gradients with respect to the network parameters; in other words, the value of the loss itself does not seem to matter for training. What matters is the gradient of that value with respect to the network parameters.

Now back to the original question: why is it necessary to avoid a negative loss?
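
To illustrate the point above, here is a minimal sketch (assuming a PyTorch setup with a toy linear classifier and cross-entropy loss; none of this comes from the repo itself) showing that subtracting a constant from the loss, even one that makes it negative, leaves the parameter gradients unchanged:

```python
import torch

torch.manual_seed(0)

model = torch.nn.Linear(4, 3)          # toy network (hypothetical example)
x = torch.randn(8, 4)                  # dummy inputs
y = torch.randint(0, 3, (8,))          # dummy class labels
criterion = torch.nn.CrossEntropyLoss()

# Gradients of the original loss
loss = criterion(model(x), y)
grads_a = torch.autograd.grad(loss, model.parameters())

# Gradients of the shifted (now negative) loss
shifted_loss = criterion(model(x), y) - 100.0
grads_b = torch.autograd.grad(shifted_loss, model.parameters())

# The constant offset leaves every gradient unchanged
for ga, gb in zip(grads_a, grads_b):
    assert torch.allclose(ga, gb)

print(loss.item(), shifted_loss.item())  # e.g. ~1.1 vs ~-98.9, identical gradients
```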

wufei-png commented 11 months ago

Because of the backpropagation process, doesn't the sign of the loss determine the sign of the gradient?