Problem:
I ran pretrain.py to train a response encoder, and its accuracy is only about 0.2, while the checkpoint filename in your shell script, 'epoch78_batch99999_acc0.99', indicates that you obtained a pre-trained model with nearly perfect accuracy.
Detailed experiment:
I tested the pretraining on the WMT14 (EN-DE) dataset, the same one used in your shell script, preprocessing it with subword-nmt and using newstest2013 as dev.txt (the commands are sketched below). I haven't changed any of the parameters set in your pretrain.sh.
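In case it helps to narrow down the discrepancy, this is roughly how I ran the subword-nmt preprocessing. It is only a sketch of my setup: the merge count of 32000, the joint BPE vocabulary, and the file names are my own choices, since pretrain.sh does not pin them down.

```sh
# Learn a joint BPE model on both sides of the training data
# (32000 merge operations is my assumption, not from pretrain.sh)
cat train.en train.de | subword-nmt learn-bpe -s 32000 -o bpe.codes

# Apply the learned codes to every split and language
for split in train dev test; do
    for lang in en de; do
        subword-nmt apply-bpe -c bpe.codes < $split.$lang > $split.bpe.$lang
    done
done
```

If your preprocessing differs from this (e.g. a different merge count, separate vocabularies per language, or an extra tokenization step), that alone might explain part of the accuracy gap.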
Could you help me figure out what is going wrong?
I would sincerely appreciate your guidance on reproducing the work.