Closed: Mrils closed this issue 4 years ago
I think it's fine, as log(x) can always take a negative value when x < 1.
So what does the network optimize toward, then? Minimizing the absolute value of the loss?
Hi @Mrils, that's correct. The optimization process pushes the denominator in Eq. 12, 13 to be high, which prevents both the second term and the total loss from diverging to negative infinity.
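A minimal numerical sketch of this behavior, assuming a heteroscedastic-style loss of the form `residual^2 / sigma2 + log(sigma2)` (a common uncertainty loss; this is an assumed stand-in for Eq. 12/13, not necessarily the paper's exact formula):

```python
import math

def uncertainty_loss(residual: float, sigma2: float) -> float:
    """Assumed heteroscedastic-style loss: residual^2 / sigma2 + log(sigma2)."""
    return residual ** 2 / sigma2 + math.log(sigma2)

# If sigma2 (the denominator) collapsed toward 0, log(sigma2) -> -inf, but the
# first term -> +inf even faster, so gradient descent pushes sigma2 up instead.
# Setting the derivative w.r.t. sigma2 to zero gives the optimum sigma2 = residual^2,
# where the loss equals 1 + log(residual^2): finite, and negative whenever
# |residual| < exp(-1/2) (about 0.606).
r = 0.1
print(uncertainty_loss(r, r ** 2))  # 1 + log(0.01), about -3.605
```

So under this assumed form, a loss that settles at a negative value is a perfectly valid, finite minimum rather than a divergence.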
So will the loss stabilize around a negative value after some epochs?
Correct. In our experiments, this occurred quite early.
Got it! Thank you
Dear Mrils, I am having trouble writing the loss function while reproducing the training. Could you share your loss function code with me? I would be very grateful!
Hi! When I try to increase the uncertainty part (described as the log term in your paper) in the network, the loss becomes negative. Is this normal? How did you handle it?