Closed KatieBelli closed 6 months ago
It is normal because the loss function was changed in v6. Also, the `-` indicates a negative value.
Is there a short explanation for this? Are higher values better? Do they have to be negative first and then reach 0 toward the end of a perfect training run?
I'm confused because I was used to the values from before version 6 (beta). Thank you for updating this code regularly.
Of course, the smaller the loss, the better: 0 is better than 0.1, and -0.1 is better than 0.
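As a side note on why a loss can be negative at all (a generic illustration, not necessarily this repo's actual loss function): likelihood-based losses such as a Gaussian negative log-likelihood go below zero once the predicted density at the target exceeds 1, so "smaller is better" still holds even for negative values:

```python
import math

def gaussian_nll(x, mu, sigma):
    """Negative log-likelihood of x under N(mu, sigma^2)."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + (x - mu) ** 2 / (2 * sigma**2)

# A wide predicted distribution gives a positive loss...
loose = gaussian_nll(0.0, 0.0, 1.0)   # ≈ 0.92
# ...but a sharp, accurate prediction drives it negative.
tight = gaussian_nll(0.0, 0.0, 0.1)   # ≈ -1.38
```

So there is no special meaning to crossing 0; the loss simply keeps decreasing as the model improves.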
Thanks. How do we recognize that the model is nearly finished?
For example: current epoch = 42, training loss = -0.008521, validation loss = -0.009089.
Is the "goal" to reach a value like -0.01 for the validation loss? In version 5 the values decreased, and I thought the model had to reach 0 to be perfectly trained.
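There is no fixed target value to aim for. A common generic heuristic (not specific to this project) is to stop when the validation loss has not improved for a number of epochs; a minimal sketch, with hypothetical `patience` and `min_delta` parameters:

```python
def should_stop(val_losses, patience=10, min_delta=1e-4):
    """Return True when the best validation loss of the last `patience`
    epochs is no better than the best loss seen before that window."""
    if len(val_losses) <= patience:
        return False  # not enough history yet
    best_before = min(val_losses[:-patience])
    recent_best = min(val_losses[-patience:])
    return recent_best > best_before - min_delta

# Still improving: keep training.
print(should_stop([0.5 - 0.01 * i for i in range(20)], patience=10))  # False
# Plateaued for 11 epochs: stop.
print(should_stop([0.5, 0.4, 0.3] + [0.3] * 11, patience=10))         # True
```

This works the same whether the loss values are positive or negative, since only relative improvement matters.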
In the previous version I had the following values for epoch 0 (same dataset used in version v6.0.0b1 and 5.0.5):
training loss = 0.001006, validation loss = 0.001161
Now in version v6.0.0b1 I have those values for epoch 0:
training loss = -0.005903, validation loss = -0.007134
Are those values normal? I'm confused because there is a "-" and the absolute values are much larger.