DongjunLee / conversation-tensorflow

TensorFlow implementation of Conversation Models

Significant overfit on default hyperparameters on cornell-movie-dialogs config #11

Open lk251 opened 6 years ago

lk251 commented 6 years ago

Running: python main.py --config cornell-movie-dialogs --mode train

to completion (100,000 steps) results in a training loss of about 2.6 and a test loss of about 8.4.

Which hyperparameters did you use? The resulting chatbot doesn't work very well (the one in your README is a lot better).

Thank you!

DongjunLee commented 6 years ago

The Seq2Seq model has a known gap between training and inference: at training time the decoder is fed the ground-truth tokens, but at inference time it has to consume its own predictions. You can find practical tips for training sequence-to-sequence models with attention in this blog post.
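To make the train/inference gap concrete, here is a minimal, self-contained sketch (not code from this repo; `decode_step` is a hypothetical stand-in for a seq2seq decoder cell). With `sampling_prob=0.0` the decoder is teacher-forced on ground-truth tokens, as during training; with `sampling_prob=1.0` it feeds back its own predictions, as at inference. Scheduled sampling interpolates between the two:

```python
import random

def decode_step(prev_token, state):
    # Toy "model": a hypothetical stand-in for one decoder-cell step.
    return (prev_token + state) % 10, state + 1

def decode(target, sampling_prob=0.0, seed=0):
    """Teacher forcing when sampling_prob=0.0; free-running when 1.0."""
    rng = random.Random(seed)
    state, prev, outputs = 0, target[0], []
    for t in range(1, len(target)):
        pred, state = decode_step(prev, state)
        outputs.append(pred)
        # Scheduled sampling: with probability sampling_prob feed back
        # the model's own prediction, otherwise the ground-truth token.
        prev = pred if rng.random() < sampling_prob else target[t]
    return outputs
```

Because the two regimes produce different input distributions for the decoder, a model that only ever saw teacher forcing during training can degrade sharply at inference time.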

I don't remember the model's details because I worked on it a while ago. I guess the result in the README is even more overfit than yours (200,000 steps).

And cornell-movie-dialogs is too small to train a conversation model on; with so little data, overfitting is inevitable.
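Given the small dataset, a common mitigation (a generic sketch, not a feature of this repo) is early stopping: monitor validation loss and stop training once it has not improved for a set number of evaluations, instead of running the full 100,000 steps:

```python
def early_stop(val_losses, patience=3):
    """Return the index of the eval at which training should stop:
    the first eval where validation loss has failed to improve for
    `patience` consecutive evals, or the last eval otherwise."""
    best, since_best = float("inf"), 0
    for step, loss in enumerate(val_losses):
        if loss < best:
            best, since_best = loss, 0  # new best: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return step  # validation loss has stalled; stop here
    return len(val_losses) - 1
```

On a trace like the one in this issue, where validation loss bottoms out early and then climbs while training loss keeps falling, this would halt training near the minimum of the validation curve.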

lk251 commented 6 years ago

Thanks for the advice!