fish128 closed this issue 7 years ago
This indicates that the model is overfitting. It continues to get better and better at fitting the data that it sees (training data) while getting worse and worse at fitting the data that it does not see (validation data).
@jerheff Thanks for your reply. We can say that it's overfitting the training data since the training loss keeps decreasing while validation loss started to increase after some epochs. However, both the training and validation accuracy kept improving all the time. How can we explain this?
The test accuracy in the graph looks flat after the first 500 iterations or so. Accuracy can remain flat while the loss gets worse, as long as the predicted scores don't cross the threshold where the predicted class changes.
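A toy illustration of that point (not from the thread, just a minimal sketch with made-up probabilities): the model can keep the same predicted class, so accuracy is unchanged, while its confidence in the correct class drops and the cross-entropy loss rises.

```python
import math

def cross_entropy(p_true_class):
    # negative log-likelihood of the correct class
    return -math.log(p_true_class)

def predicted_class(probs):
    # index of the highest-probability class
    return max(range(len(probs)), key=lambda i: probs[i])

# Earlier epoch: correct class (index 0) predicted with 0.9 confidence
early = [0.9, 0.1]
# Later epoch: still predicts class 0, but with less confidence
late = [0.6, 0.4]

print(predicted_class(early), predicted_class(late))   # both 0 -> accuracy unchanged
print(cross_entropy(early[0]), cross_entropy(late[0])) # loss rises: ~0.105 -> ~0.511
```

This is exactly the accuracy-up, loss-up pattern several people in this thread are describing, aggregated over many samples.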
@jerheff Thanks so much and that makes sense! One more question: What kind of regularization method should I try under this situation?
@fish128 Did you find a way to solve your problem (regularization or other loss function)? Thanks in advance.
I am training a deep CNN (using the VGG19 architecture in Keras) on my data. I used "categorical_crossentropy" as the loss function. During training, the training loss keeps decreasing and training accuracy keeps increasing slowly. But the validation loss started increasing while the validation accuracy has not improved. The curves of loss are shown in the following figure. It also seems that the validation loss will keep going up if I train the model for more epochs. Does anyone have an idea what's going on here? Thanks!
Just as jerheff mentioned above, this is because the model is overfitting the training data: it becomes extremely good at classifying the training data but generalizes poorly, so classification of the validation data gets worse. You could address this by stopping training when the validation error starts increasing, or by injecting noise into the training data to prevent the model from overfitting when training for longer.
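The "stop when validation error starts increasing" idea is what Keras's built-in `EarlyStopping` callback does (with `monitor='val_loss'`, `patience=...`, `restore_best_weights=True`). The core logic can be sketched in plain Python, here over a hypothetical sequence of per-epoch validation losses:

```python
def early_stop(val_losses, patience=3):
    """Return the epoch index with the best validation loss once
    `patience` epochs pass without improvement, or None if early
    stopping never triggers."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited >= patience:
                return best_epoch
    return None

# Validation loss bottoms out at epoch 2, then climbs for 3 epochs
print(early_stop([1.0, 0.8, 0.7, 0.75, 0.9, 1.1]))  # -> 2
```

In practice you would restore the weights from the returned epoch, which is what `restore_best_weights=True` handles for you in Keras.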
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
I am experiencing the same thing. Validation loss is increasing, and validation accuracy also increases, but after some time (about 10 epochs) the accuracy starts dropping.
The question is still unanswered. Validation loss increases but validation accuracy also increases.
Does this indicate that you are overfitting a class, or that your data is imbalanced, so you get high accuracy on the majority class while the loss still increases as predictions drift away from the minority classes?
I would like to ask a follow-up question on this: what does it mean if the validation loss fluctuates rather than monotonically increasing or decreasing?
I think your model was predicting more accurately but less confidently: accuracy improves, while the loss, which penalizes low confidence on correct predictions, gets worse.
What does this even mean? I have the same situation where val loss and val accuracy are both increasing.
And when I tested it with test data (not train, not val), the accuracy is still legit and it even has lower loss than the validation data!
Very confusing......
This question is still unanswered. I am facing the same problem while using a ResNet model on my own data.
Learning rate: 0.0001
73/73 [==============================] - 9s 129ms/step - loss: 0.1621 - acc: 0.9961 - val_loss: 1.0128 - val_acc: 0.8093
Epoch 00100: val_acc did not improve from 0.80934
How can I improve this? I have no idea (validation loss is 1.0128 👎).
Who has solved this problem? any one can give some point?
Check that your model's loss is implemented correctly. That helped me spot a bug.
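One way to do that sanity check is to recompute the loss by hand on a tiny input whose value you can verify analytically, and compare it against what your framework reports. A minimal reference implementation of categorical cross-entropy (averaged over samples, with one-hot labels):

```python
import math

def categorical_crossentropy(y_true, y_pred, eps=1e-12):
    # mean over samples of -sum(true_i * log(pred_i))
    total = 0.0
    for t_row, p_row in zip(y_true, y_pred):
        total += -sum(t * math.log(max(p, eps)) for t, p in zip(t_row, p_row))
    return total / len(y_true)

# Uniform predictions over 2 classes: the loss must be exactly ln(2)
y_true = [[1, 0], [0, 1]]
y_pred = [[0.5, 0.5], [0.5, 0.5]]
print(categorical_crossentropy(y_true, y_pred))  # -> ln(2) ~ 0.693
```

If your training code reports a noticeably different number on the same input (watch out for logits vs. probabilities, e.g. Keras's `from_logits` flag), you have found your bug.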
I experienced the same issue, but what I found is that my validation dataset was much smaller than the training dataset. This causes the validation loss to fluctuate over epochs.
I have also run into this problem.
Hello, I also encountered a similar problem. My training loss and validation loss are both relatively stable, but there is roughly a 10x gap between the two, and the validation loss fluctuates a little. How do I solve this?
I have the same problem: my training accuracy improves and training loss decreases, but my validation accuracy flattens out, and my validation loss decreases to some point and then increases early in training, around epoch 100 (training for 1000 epochs). Even though I added L2 regularisation and introduced a couple of Dropout layers to my model, I still get the same result. I am working on time-series data, so data augmentation is still a challenge for me. I used an 80:20 train:test split. Can anyone suggest some tips to overcome this? Thanks in advance.
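For readers unfamiliar with the L2 regularisation mentioned above: it adds a penalty proportional to the sum of squared weights to the training loss, discouraging large weights. A minimal sketch with made-up numbers (in Keras itself you would attach it per layer via `kernel_regularizer=regularizers.l2(...)`):

```python
def l2_penalty(weights, lam=1e-4):
    # L2 (weight decay) term: lam * sum of squared weights
    return lam * sum(w * w for w in weights)

data_loss = 0.35            # hypothetical cross-entropy value
weights = [0.5, -1.2, 3.0]  # hypothetical model weights
total_loss = data_loss + l2_penalty(weights)
```

Tuning `lam` matters: too small and the penalty does nothing against overfitting; too large and it dominates the data loss and the model underfits, which may be why adding it "did nothing" in some of the reports here.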
This might be helpful: https://discuss.pytorch.org/t/loss-increasing-instead-of-decreasing/18480/4
The model is overfitting the training data. To address this you can try: 1. Regularization 2. Adding more data to the dataset, or data augmentation.
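On the augmentation suggestion: for data where image-style transforms don't apply (e.g. the time-series case mentioned in this thread), one simple option is to add jittered copies of each sample. A minimal sketch, with made-up data and a hypothetical noise level:

```python
import random

def augment(samples, copies=2, noise_std=0.05, seed=0):
    """Expand a dataset by appending `copies` noisy versions of each
    sample (Gaussian noise on every feature). The originals are kept."""
    rng = random.Random(seed)
    out = list(samples)
    for _ in range(copies):
        for x in samples:
            out.append([v + rng.gauss(0, noise_std) for v in x])
    return out

data = [[0.1, 0.2], [0.3, 0.4]]
bigger = augment(data)
print(len(bigger))  # -> 6 (2 originals + 2 noisy copies of each)
```

The noise scale should be small relative to the feature scale; it is essentially the "inducing noise in the training data" idea raised earlier in the thread.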
I'm experiencing a similar problem. I tried regularization and data augmentation. Could you give me advice?
How about adding more features to the data (new columns describing each sample)? Could that be a way to improve this? (I'm facing the same scenario.)
I also experienced this. After normalizing the target (y) column, the validation loss slowly decreased. I recommend applying min-max or log normalization.
Hello, I am facing exactly the same issue. Were you able to solve it?
Hello, Did anyone fix this issue?
I am training a deep CNN (4 layers) on my data. I used "categorical_crossentropy" as the loss function. During training, the training loss keeps decreasing and training accuracy keeps increasing until convergence. But the validation loss started increasing while the validation accuracy is still improving. The curves of loss and accuracy are shown in the following figures:
It also seems that the validation loss will keep going up if I train the model for more epochs. Does anyone have an idea what's going on here?
Thanks a lot!