anilrgukt opened this issue 8 years ago
Hi Anil,
the training script prints the negative log-likelihood, so if it decreases, that's good.
I don't have a tutorial for training an MCGSM, only this example: https://github.com/lucastheis/cmt#python-example
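That example trains on (input, output) pixel pairs. In case it helps, here is a minimal, library-free sketch of how such causal-neighborhood pairs can be extracted from an image (plain NumPy; the function name and window shape are my own choices for illustration, not cmt's API):

```python
import numpy as np

def extract_pairs(img, neighborhood=2):
    """Collect (input, output) pairs for conditional pixel modeling.

    Each output is a single pixel; each input is the flattened set of
    pixels that precede it in raster order within a small window above
    and around it (a causal neighborhood).
    """
    H, W = img.shape
    inputs, outputs = [], []
    for i in range(neighborhood, H):
        for j in range(neighborhood, W - neighborhood):
            # window of rows i-n..i and columns j-n..j+n, flattened row-major
            block = img[i - neighborhood:i + 1,
                        j - neighborhood:j + neighborhood + 1].ravel()
            # drop the target pixel and the pixels after it in raster order
            causal = block[:-(neighborhood + 1)]
            inputs.append(causal)
            outputs.append(img[i, j])
    # shapes: dim_in x N and 1 x N (columns are examples)
    return np.array(inputs).T, np.array(outputs)[None, :]
```

Arrays in this column-per-example layout can then be fed to a conditional model's training routine.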
Lucas
Dear Lucas,
When using experiment/train.py with validation, loss_valid comes out negative, whereas the training loss and the loss computed on test data by experiment/evaluate.py are both positive. Is it normal to get negative values for loss_valid even though it is computed as a negative log-likelihood?
Thanks and regards, Akshat
How different are the numbers?
We are working with the BSDS300 dataset, a batch size of 64, and 6 iterations per mini-batch. While training, the negative log-likelihood starts at 1.95 and eventually decreases to around 0.95, as shown in the following figure. The validation loss is 1.57 after initialization, but we get -3.448, -3.508, -3.508, -3.513, -3.514 in the subsequent epochs. The score evaluated on test data using evaluate.py is also positive (around 3.07).
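For what it's worth, a negative value is not impossible in itself: for continuous data a density can exceed 1, so an average negative log-likelihood can legitimately go below zero. This is a general observation, not a diagnosis of train.py; a small self-contained check with a narrow Gaussian shows the effect:

```python
import numpy as np

def gauss_nll(x, mu=0.0, sigma=0.05):
    """Negative log-density of a 1-D Gaussian N(mu, sigma^2) in nats."""
    return 0.5 * np.log(2 * np.pi * sigma**2) + 0.5 * ((x - mu) / sigma) ** 2

# Samples tightly concentrated around 0: density values exceed 1,
# so the average negative log-likelihood comes out negative.
samples = np.random.default_rng(0).normal(0.0, 0.05, size=1000)
avg_nll = gauss_nll(samples).mean()

# Scores reported in bits rather than nats differ by a factor of log(2),
# which is another possible source of mismatched numbers between scripts.
avg_nll_bits = avg_nll / np.log(2)
```

So a sign difference between scripts could also come down to different normalizations (per pixel vs per patch) or different log bases; that is speculation on my part, not something I have checked in the code.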
Also, a general question: when using code/experiments/train.py, what is the loss it prints? Is it the average log-likelihood? If so, the log-likelihood should increase during training, am I right?
For me, the printed score is continually decreasing as the epochs progress.
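To make sure I understand the relationship: if the printed loss is the average negative log-likelihood, then it decreasing is exactly the average log-likelihood increasing, since one is the negation of the other. A toy sanity check (plain NumPy, fitting just the mean of a Gaussian by gradient descent; not the MCGSM objective):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=500)

def avg_nll(mu):
    # average negative log-likelihood of the data under N(mu, 1), in nats
    return 0.5 * np.log(2 * np.pi) + 0.5 * np.mean((data - mu) ** 2)

mu = 0.0
history = []
for _ in range(100):
    grad = -np.mean(data - mu)   # derivative of avg_nll w.r.t. mu
    mu -= 0.1 * grad             # descending the NLL ascends the LL
    history.append(avg_nll(mu))
```

Here `history` decreases toward its minimum while the log-likelihood `-avg_nll(mu)` increases, and `mu` converges to the sample mean.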
Thanks, Anil