Hi, I recently used the code to train on my own dataset and noticed something strange: on TensorBoard, my validation loss is consistently lower than my training loss, yet my training accuracy is higher than my validation accuracy.
What I don't understand is why the validation loss stays below the training loss. The gap between the two curves stays roughly constant, as in the image below, and I can't get them to converge.
I know that model.train() and model.eval() behave differently, but I expected the curves to converge as the epochs increase.
Has anyone encountered this issue before? Could it happen because the training dataset is too small?
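For context, here is a minimal sketch (my own toy example, not the actual training code) of one way train/eval mode can produce different losses on the very same batch: in PyTorch, layers like Dropout are stochastic and active under model.train() but disabled under model.eval(), so the training-mode loss is noisier and typically higher.

```python
import torch
import torch.nn as nn

# Hypothetical toy model with dropout; the sizes and p=0.5 are arbitrary.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(10, 2))
loss_fn = nn.CrossEntropyLoss()
x = torch.randn(32, 10)
y = torch.randint(0, 2, (32,))

# Train mode: dropout is active, so repeated forward passes on the same
# batch give different (generally noisier, higher) losses.
model.train()
train_losses = [loss_fn(model(x), y).item() for _ in range(20)]

# Eval mode: dropout is disabled, so the loss is deterministic.
model.eval()
with torch.no_grad():
    eval_loss_1 = loss_fn(model(x), y).item()
    eval_loss_2 = loss_fn(model(x), y).item()

print("train-mode losses vary:", min(train_losses), "to", max(train_losses))
print("eval-mode loss is fixed:", eval_loss_1)
```

This alone explains a constant offset between the curves, but I would still expect both to trend downward together.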