@jeff42e overfitting is indicated by increasing validation losses. See https://en.wikipedia.org/wiki/Overfitting
@glenn-jocher thank you for the response!
Since my validation loss decreased gradually during training, this was a successful training run, wasn't it? And regarding my second question: why does the model find no objects when I upload the test images separately and apply the same weights to them?
@jeff42e yes, decreasing validation losses are the intended result.
I can't speak to your question on anecdotal results.
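As a practical way to check the rising-validation-loss signal, here is a minimal sketch that reads the `results.csv` YOLOv5 writes during training and reports whether the summed validation losses have been climbing since their minimum. The `runs/train/exp` path and the 5-epoch smoothing window are assumptions; adjust them to your own run.

```python
# Sketch: flag a sustained rise in YOLOv5 validation losses from results.csv.
# Assumes the default training output path runs/train/exp/results.csv.
import pandas as pd

df = pd.read_csv("runs/train/exp/results.csv")
df.columns = df.columns.str.strip()  # YOLOv5 pads column names with spaces

val_loss = df["val/box_loss"] + df["val/obj_loss"] + df["val/cls_loss"]
smoothed = val_loss.rolling(window=5, min_periods=1).mean()  # smooth epoch-to-epoch noise

best_epoch = int(smoothed.idxmin())
last_epoch = len(df) - 1
print(f"Lowest smoothed validation loss at epoch {best_epoch} of {last_epoch}")
if best_epoch < last_epoch and smoothed.iloc[-1] > smoothed.iloc[best_epoch]:
    print("Validation loss has risen since its minimum -> possible overfitting")
```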
Hi @glenn-jocher, during YOLOv5 model training (500 epochs), the validation losses began to rise around epoch 200, while the mAP50 and mAP50-95 metrics continued to improve. Could you please advise on the implications of this trend? Does it suggest potential overfitting in our model? Thank you
@wtjasmine hi there,
Thanks for reaching out. When validation losses start to rise while the mAP metrics are still improving, it can be an early sign of overfitting: the loss penalizes every prediction's box error and confidence calibration, whereas mAP depends only on how predictions rank, so a model that is beginning to memorize the training data can push the validation loss up before the ranking-based mAP visibly suffers. Keep an eye on the validation losses, since they serve as a measure of generalization performance, and consider analyzing other evaluation metrics and conducting further validation to ensure the robustness of your trained model.
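To make that divergence visible, a minimal plotting sketch (again assuming the default `runs/train/exp/results.csv` output path) puts the summed validation loss and mAP@0.5 on one figure:

```python
# Sketch: plot validation loss and mAP@0.5 together to see where they diverge.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("runs/train/exp/results.csv")
df.columns = df.columns.str.strip()  # YOLOv5 pads column names with spaces

val_loss = df["val/box_loss"] + df["val/obj_loss"] + df["val/cls_loss"]

fig, ax_loss = plt.subplots()
ax_loss.plot(df["epoch"], val_loss, color="tab:red")
ax_loss.set_xlabel("epoch")
ax_loss.set_ylabel("summed validation loss", color="tab:red")

ax_map = ax_loss.twinx()  # second y-axis for the metric
ax_map.plot(df["epoch"], df["metrics/mAP_0.5"], color="tab:blue")
ax_map.set_ylabel("mAP@0.5", color="tab:blue")

fig.tight_layout()
fig.savefig("val_loss_vs_map.png")
```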
Feel free to let me know if you have any further questions.
Regards
@glenn-jocher Thank you for your explanation!
@wtjasmine you're welcome! I'm glad I could help explain the situation to you. If you have any further questions or need assistance with anything else, feel free to ask. Have a great day!
Hi @glenn-jocher, I have another question about the YOLOv5 model. Is it possible for the metrics on the test set to be better than those on the validation set? If my model performs better on the test set compared to the validation set, does this indicate overfitting? Thank you
@wtjasmine hi there,
It's possible for the metrics on the test set to be better than those on the validation set, although it's not a common scenario. More often this points to differences between the two splits rather than to classic overfitting: a smaller or easier test set, a different class or image distribution, or near-duplicate images leaking between the training and test data. Overfitting is typically indicated by increasing validation losses or deteriorating performance on unseen data, so keep a close eye on those aspects, and verify that your validation and test splits are comparable in size and difficulty before reaching a conclusion.
Let me know if you have any more questions.
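If you want to compare the two splits directly, a minimal sketch (assuming it is run from the yolov5 repo root, that `runs/train/exp/weights/best.pt` exists, and that your dataset YAML defines both `val:` and `test:` image lists) is to evaluate the same weights on each split with `val.py` and compare the printed precision/recall/mAP:

```python
# Sketch: evaluate the same trained weights on the val and test splits with val.py.
import subprocess

weights = "runs/train/exp/weights/best.pt"  # trained weights (assumed path)
data = "data/custom.yaml"                   # dataset YAML with val: and test: entries (assumed path)

for task in ("val", "test"):
    print(f"--- evaluating task={task} ---")
    subprocess.run(
        ["python", "val.py", "--weights", weights, "--data", data, "--task", task],
        check=True,
    )
```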
Regards
Hi @glenn-jocher, I am currently observing fluctuations in the validation losses across epochs, and it appears that overfitting is occurring in the early stages of training. I am seeking your expertise on whether it would be advisable to adjust hyperparameters, such as the learning rate, to address this issue. Does fluctuation of the validation losses across epochs also indicate overfitting? Your guidance in this matter would be greatly appreciated.
@wtjasmine hi there,
Fluctuations in the validation losses are not, by themselves, proof of overfitting: epoch-to-epoch noise is common when the learning rate is high, the batch size is small, or the validation set is small or noisy. Overfitting shows up as a sustained upward trend in the validation losses rather than oscillation around a flat or decreasing trend. Adjusting hyperparameters, such as the learning rate, is a reasonable first step if the curves are unstable. It's still important to carefully analyze the behavior of the other performance metrics and to conduct further validation before drawing conclusions, so feel free to experiment with different hyperparameter settings and monitor the overall performance of your model for better generalization.
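For the learning-rate adjustment, one minimal sketch is to copy the default hyperparameters, halve `lr0`, and pass the copy to `train.py` via `--hyp`. The file names `data/hyps/hyp.scratch-low.yaml` and `data/custom.yaml` are assumptions (older YOLOv5 versions keep the hyperparameter file elsewhere); adjust them to your setup.

```python
# Sketch: lower the initial learning rate via a copied hyperparameter file.
import subprocess
import yaml

with open("data/hyps/hyp.scratch-low.yaml") as f:  # default hyperparameters (assumed path)
    hyp = yaml.safe_load(f)

hyp["lr0"] *= 0.5  # e.g. 0.01 -> 0.005

with open("hyp.custom.yaml", "w") as f:
    yaml.safe_dump(hyp, f)

subprocess.run(
    ["python", "train.py",
     "--data", "data/custom.yaml",
     "--weights", "yolov5s.pt",
     "--epochs", "100",
     "--hyp", "hyp.custom.yaml"],
    check=True,
)
```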
Let me know if you have any more questions or need further assistance.
Thanks!
These are my training results; it seems the model does not converge, and val/obj_loss started to increase around epoch 20. With this kind of fluctuation, does this indicate potential overfitting? Could you please suggest what further validation I should conduct in order to make an informed decision?
@wtjasmine hi there,
Thank you for sharing your training results. Based on the provided graph, the validation objectness loss (val/obj_loss) does appear to start increasing around epoch 20, which could indicate potential overfitting.
To validate this further, here are a few suggestions:
Monitor other performance metrics: Besides the validation objectness loss, keep an eye on other metrics such as mAP (mean Average Precision), precision, and recall. If these metrics also start to deteriorate or show inconsistencies, it would support the overfitting hypothesis.
Conduct validation on separate data: Use a separate dataset for validation to evaluate the model's performance on unseen data. This will help verify if the model is indeed overfitting to the training data.
Regularization techniques: Consider using regularization techniques, such as weight decay or dropout, to mitigate overfitting. These techniques can help prevent the model from excessively memorizing training data and encourage better generalization.
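As a concrete sketch of the regularization point above, you can raise `weight_decay` in a copied hyperparameter file and retrain; YOLOv5's `--patience` flag additionally stops training early once the fitness metric stops improving. The file paths and the factor of 2 are assumptions; tune them for your dataset.

```python
# Sketch: increase weight decay and enable early stopping to limit overfitting.
import subprocess
import yaml

with open("data/hyps/hyp.scratch-low.yaml") as f:  # default hyperparameters (assumed path)
    hyp = yaml.safe_load(f)

hyp["weight_decay"] *= 2.0  # e.g. 0.0005 -> 0.001

with open("hyp.reg.yaml", "w") as f:
    yaml.safe_dump(hyp, f)

subprocess.run(
    ["python", "train.py",
     "--data", "data/custom.yaml",
     "--weights", "yolov5s.pt",
     "--epochs", "100",
     "--hyp", "hyp.reg.yaml",
     "--patience", "30"],  # stop if fitness has not improved for 30 epochs
    check=True,
)
```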
Remember that it's essential to analyze the overall behavior of your model and consider multiple factors before making any conclusive decisions.
Let me know if there's anything else I can assist you with.
Thanks!
I have trained YOLOv5 on a customized dataset and my results look like this:
How do I know if my network has overfitted during training? The mAP finally reached a very high value.
If I apply the model to my test images, I get very good results. However, if I load my trained weights separately and run them on the same test images again, the network does not find any objects in the images. How can this be explained?
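As an editorial note on reproducing detections with trained weights: things worth ruling out when detections disappear at inference are running `detect.py` with the stock `yolov5s.pt` instead of the trained `best.pt`, a confidence threshold that is too high, and an image size that differs from training. A minimal sketch (paths are assumptions; adjust to your run):

```python
# Sketch: run detect.py with the trained weights explicitly, a low confidence
# threshold, and the training image size, to rule out common inference mismatches.
import subprocess

subprocess.run(
    ["python", "detect.py",
     "--weights", "runs/train/exp/weights/best.pt",  # trained weights, not the stock yolov5s.pt
     "--source", "path/to/test/images",              # the same test images (assumed path)
     "--img", "640",                                 # image size used during training
     "--conf-thres", "0.1"],                         # low threshold while debugging
    check=True,
)
```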