jfzhang95 / pytorch-deeplab-xception

DeepLab v3+ model in PyTorch. Support different backbones.
MIT License
2.89k stars 779 forks

validation loss always lower than train loss #225

Open ycc66104116 opened 2 years ago

ycc66104116 commented 2 years ago

Hi, I recently used this code to train on my own dataset, and I noticed something strange: on TensorBoard, my validation loss is always lower than my training loss, even though training accuracy is higher than validation accuracy. The important thing is I don't know why the validation loss stays lower. The gap between the two curves stays roughly the same, as in the image below, and I can't get them to converge. [image: train/val loss curves]

I know model.train() and model.eval() behave differently, but I expected the two curves to converge as the epochs increase. Has anyone run into this before? Could it happen because the training dataset is too small?
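One common cause of exactly this pattern is the train/eval mode difference mentioned above: the training loss is logged with Dropout (and BatchNorm in batch-statistics mode) active, while the validation loss is computed with those layers switched off, so the two numbers are not directly comparable even for identical weights. A minimal sketch below illustrates this with a hypothetical toy model (standing in for DeepLab v3+, which also contains Dropout and BatchNorm): scoring the very same batch in train mode vs. eval mode yields different losses. A practical diagnostic is to re-evaluate the training set in eval mode and compare that number to the validation loss instead.

```python
import torch
import torch.nn as nn

# Hypothetical toy model: the same effect appears in any network
# with Dropout or BatchNorm layers, including DeepLab v3+.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(32, 1))
criterion = nn.MSELoss()
x, y = torch.randn(8, 16), torch.randn(8, 1)

# Loss as it is logged during training: Dropout is active, so the
# loss is computed on a randomly "handicapped" network.
model.train()
train_mode_loss = criterion(model(x), y).item()

# The same batch scored in eval mode (Dropout disabled), which is
# how the validation loss is computed.
model.eval()
with torch.no_grad():
    eval_mode_loss = criterion(model(x), y).item()

# The two losses differ even though the weights are identical --
# one source of a persistent train/val loss gap.
print(train_mode_loss, eval_mode_loss)
```

If the gap disappears when the training loss is recomputed in eval mode, the curves were never going to converge by construction; if a large gap remains, that points to something else, such as a distribution mismatch between the train and validation splits.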