@jeicy07 Hi, we have never encountered the problem you mentioned. Have you checked the log file? What are the accuracy and the loss values? Additionally, we suggest checking the input.
I faced the same problem. I trained the model on a low-resource language pair and figured out the cause: the output sentences are blank because all of their tokens are 0 (the pad token). This happens when the model emits the end-of-sentence token '/S' at index 0, so the entire sentence is truncated and filled with padding, leaving an empty sentence. It only occurred at the beginning of training (the initial epochs); I did not see empty sentences in later epochs. However, it causes a 'division by zero' error when computing the BLEU score, because the hypothesis length is zero (shorter than the n-gram order BLEU expects). As a workaround, I appended an '/s' token whenever an empty sentence was generated, so the BLEU score could still be computed.
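For reference, here is a minimal sketch of that workaround: strip padding, and replace an empty hypothesis with a single end-of-sentence token before scoring, so the brevity-penalty term never divides by a zero hypothesis length. The token names (`</s>`, pad id 0, eos id 2) and the use of NLTK's `sentence_bleu` are assumptions for illustration, not the code actually used in this repository.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def strip_after_eos(token_ids, pad_id=0, eos_id=2):
    """Drop everything from the first EOS token onward, plus any padding."""
    out = []
    for tok in token_ids:
        if tok == eos_id:
            break
        if tok != pad_id:
            out.append(tok)
    return out

def pad_empty_hypothesis(tokens, eos_token="</s>"):
    """Return the hypothesis unchanged unless it is empty."""
    return tokens if tokens else [eos_token]

# Example: an early-epoch output where EOS was emitted at index 0,
# so stripping leaves nothing and the hypothesis would be empty.
reference = [["the", "cat", "sat", "on", "the", "mat", "</s>"]]
hypothesis = pad_empty_hypothesis(strip_after_eos([2, 0, 0, 0, 0]))

score = sentence_bleu(
    reference,
    hypothesis,
    smoothing_function=SmoothingFunction().method1,  # avoid zero n-gram counts
)
print(score)  # small but finite, instead of a division-by-zero failure
```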
Hi, after pretraining the generator for 200 epochs, I started generating negative samples. However, I found that 90% of the outputs are blank. What kind of problem might cause this output? Thanks!