Alexander-H-Liu / End-to-end-ASR-Pytorch

This is an open-source project (formerly named Listen, Attend and Spell - PyTorch Implementation) for end-to-end ASR, implemented with PyTorch, the well-known deep learning toolkit.
MIT License

Very different loss on validation #31

Open kamilkk852 opened 5 years ago

kamilkk852 commented 5 years ago

Just for testing, I'm trying to overfit a very small dataset, and I've set the validation dataset to be the same as the training one, but I get very different loss progressions between the two stages. On the training set the loss is constantly decreasing, but on validation it starts to increase after a few epochs. I do not use dropout. Shouldn't the two be roughly the same?
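For reference, a minimal sketch of this kind of overfit sanity check, where the "validation" pass scores the exact same batch the model is trained on. The model and data here are hypothetical stand-ins, not this repo's LAS model:

```python
import torch
import torch.nn as nn

# Toy overfit check: train on one small batch and score the identical batch
# in eval mode every epoch. A real ASR model would replace `model` and the
# random tensors below.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 30))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

feats = torch.randn(8, 40)            # stand-in for acoustic features
labels = torch.randint(0, 30, (8,))   # stand-in for character targets

for epoch in range(100):
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(feats), labels)
    loss.backward()
    optimizer.step()

    # "Validation" on the identical data, in eval mode and without gradients.
    model.eval()
    with torch.no_grad():
        val_loss = criterion(model(feats), labels)
    if epoch % 20 == 0:
        print(f"epoch {epoch}: train={loss.item():.4f} val={val_loss.item():.4f}")
```

In a toy like this the two curves track each other, because both passes run the same code path on the same data. In a seq2seq ASR model the training and validation passes can genuinely differ (for example, in teacher forcing ratio, or in layers that behave differently under `model.eval()`), which is one way the losses can diverge even on identical data.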

Alexander-H-Liu commented 4 years ago

Hi @kamilkk852

In my experience, the validation loss typically starts growing within a short period. You should evaluate your model based on the validation error rate instead. Also, needing A LOT of training data is in the nature of end-to-end ASR; such models usually perform poorly on small corpora.
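To illustrate the suggestion of tracking error rate rather than loss, here is a minimal sketch of character/word error rate via edit distance. `edit_distance` and `error_rate` are hypothetical helpers for illustration, not part of this repo:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via dynamic programming."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,        # deletion
                dp[j - 1] + 1,    # insertion
                prev + (r != h),  # substitution (free when symbols match)
            )
    return dp[-1]

def error_rate(refs, hyps):
    """Corpus-level error rate: total edits / total reference length."""
    edits = sum(edit_distance(r, h) for r, h in zip(refs, hyps))
    total = sum(len(r) for r in refs)
    return edits / max(total, 1)

# Usage: CER on character strings, WER on whitespace-split word lists.
refs = ["hello world", "end to end asr"]
hyps = ["hello word",  "end to end asr"]
print("CER:", error_rate(refs, hyps))
print("WER:", error_rate([r.split() for r in refs], [h.split() for h in hyps]))
```

Unlike the cross-entropy loss, this metric scores the decoded hypotheses directly, so it can keep improving even while the validation loss drifts upward.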