Closed — chenfh21 closed this issue 1 day ago
Sorry for the delay in answering your question.
As far as I know, all baselines use the checkpoint with the best validation performance. So even if overfitting occurs during training, the checkpoint with the highest IoU on the validation set is the one that gets used. (At least that's a safe way to guard against overfitting, which always sets in at some point, even with larger datasets.)
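The checkpoint-selection strategy described above can be sketched as follows. This is a minimal illustration, not the actual PhenoBench baseline code: `select_best_checkpoint` and the toy history are hypothetical names, standing in for whatever the real training loop records per epoch.

```python
# Sketch of best-validation-checkpoint selection: after each epoch we keep
# only the weights that achieved the highest validation IoU so far, so
# later overfitted epochs never overwrite the checkpoint used for testing.
import copy

def select_best_checkpoint(epoch_results):
    """epoch_results: list of (val_iou, state_dict) pairs, one per epoch.
    Returns (best_iou, best_state): the checkpoint a later evaluation
    would load, regardless of how much subsequent epochs overfit."""
    best_iou, best_state = float("-inf"), None
    for val_iou, state in epoch_results:
        if val_iou > best_iou:  # strictly better on validation -> keep
            best_iou, best_state = val_iou, copy.deepcopy(state)
    return best_iou, best_state

# Toy run: validation IoU peaks at epoch 2, then degrades (overfitting),
# but the epoch-2 checkpoint is the one retained.
history = [(0.41, {"epoch": 0}), (0.55, {"epoch": 1}),
           (0.62, {"epoch": 2}), (0.58, {"epoch": 3}), (0.51, {"epoch": 4})]
best_iou, best_state = select_best_checkpoint(history)
print(best_iou, best_state)  # -> 0.62 {'epoch': 2}
```

In a real pipeline the state dict would be serialized to disk each time a new best is found, but the selection logic is the same.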
Thank you very much for your reply. This matches my earlier thinking: the current setup can only mitigate overfitting rather than eliminate it, and the one sure way to prevent it — pretraining on a large-scale dataset and then doing transfer learning — does not seem practical for this method itself.
Hi: My research uses the Phenobench dataset for semantic segmentation. I am curious whether the baseline training script accounts for the risk of overfitting. I understand that data augmentation is used and that the number of epochs is relatively large, but the dataset is much smaller than those in the general domain. Is there anything else I may have overlooked?