dongjun-Lee / text-summarization-tensorflow

Tensorflow seq2seq Implementation of Text Summarization.
MIT License

The training and testing loss keeps declining. I tested with the checkpoint from the 85th epoch, but the results are not good. How many epochs are needed for better results? #15

Open whaozl opened 5 years ago

dongjun-Lee commented 5 years ago

Do you mean 85 batches? Training for 85 epochs would take far too much time. I trained for two epochs to generate the sample output shown here. For example,

python train.py --num_epochs=2
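For reference, here is a minimal, hypothetical sketch (not the repository's actual train.py) of how a --num_epochs flag typically drives the outer training loop, with one epoch being a full pass over the data and many batch updates inside it. All names here (batches, train, batch_size) are illustrative assumptions, not code from this repo:

```python
# Minimal sketch of epochs vs. batches; NOT the repo's actual train.py.
import argparse

def batches(data, batch_size):
    """Yield successive fixed-size batches from the dataset."""
    for i in range(0, len(data), batch_size):
        yield data[i:i + batch_size]

def train(data, num_epochs, batch_size):
    step = 0
    for epoch in range(1, num_epochs + 1):
        for batch in batches(data, batch_size):
            # One gradient update per batch would happen here.
            step += 1
        print(f"epoch {epoch}/{num_epochs} done after {step} batch steps")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--num_epochs", type=int, default=2)
    parser.add_argument("--batch_size", type=int, default=64)
    args = parser.parse_args()
    toy_data = list(range(100_000))  # stand-in for the real training rows
    train(toy_data, args.num_epochs, args.batch_size)
```

So 85 batches is a small fraction of a single epoch, while 85 epochs would mean 85 full passes over the corpus.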
pnagula commented 5 years ago

Please let me know whether you ran training for 2 epochs on the full Gigaword dataset (3.8M rows) or only on the 50K sample rows.