Open drtonyr opened 10 months ago
Hi Tony, thanks for reporting. Here are line-by-line replies.
> It looks like `--nce` has been replaced by `--loss nce` and README.md should be updated.
Yes.
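For reference, the README example presumably maps to the following (assuming the other flags are unchanged; I have not verified the full run):

```shell
# Old README command:
#   python main.py --cuda --noise-ratio 10 --norm-term 9 --nce --train
# Presumed current equivalent, with --nce replaced by --loss nce:
python main.py --cuda --noise-ratio 10 --norm-term 9 --loss nce --train
```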
> firstly is the warning from `rnn.py`,
We use a single-layer RNN here, so the dropout config is ignored. This should be fixed later.
> secondly the perplexities are all zero.
Something must be wrong.
> Moving on to NCE, the reported train PPL is very low, the valid PPL very high.
The loss criterion differs between NCE training and evaluation (NCE vs. cross-entropy). The training PPL is just the perplexity over the noise samples plus the positive sample, so by definition it is lower than the real perplexity over the whole vocabulary.
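As a toy illustration of that gap (hypothetical scores, not this repo's actual model or loss): a softmax restricted to the target plus K noise samples always yields a lower perplexity than the full-vocabulary softmax, because the restricted normalising sum omits most of the vocabulary.

```python
import math
import random

random.seed(0)

VOCAB = 10_000  # hypothetical vocabulary size
K = 10          # noise ratio, matching --noise-ratio 10

# Hypothetical unnormalised scores, with the target a bit above the rest.
scores = [random.gauss(0.0, 1.0) for _ in range(VOCAB)]
target = 0
scores[target] += 2.0

def ppl_over(indices):
    """Perplexity of a softmax restricted to `indices` (target included)."""
    z = sum(math.exp(scores[i]) for i in indices)
    return z / math.exp(scores[target])  # 1 / p(target)

full_ppl = ppl_over(range(VOCAB))          # evaluation-style PPL
noise = random.sample(range(1, VOCAB), K)  # K noise words
train_ppl = ppl_over([target] + noise)     # training-style PPL

# The restricted sum gives the target more probability mass,
# so the "training" PPL is necessarily smaller.
print(f"full-vocab PPL: {full_ppl:.1f}, {K + 1}-sample PPL: {train_ppl:.2f}")
```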
Looking forward to further discussion!
Hey, thanks for getting back to me so quickly.
I'm not really concerned about the argparse or dropout issues. This is the best public code for NCE in language modelling I could find, which is a great achievement.
Zero perplexities is not something I can easily look into, and it's quite a blocker for someone like me just starting with the code.
I can help with the reported PPL under NCE. Firstly, for large tasks NCE will self-normalise: that is, `Z = \sum_i exp(x_i)` will be approximately 1. When this happens you can report approximate standard perplexity during training without normalising (the dev/test sets are much smaller, so there it's good to report exact PPL by normalising).
It has been ten years since I really got into this, I hope I haven't forgotten too much.
README.md refers to option `--nce`, for example:

```
python main.py --cuda --noise-ratio 10 --norm-term 9 --nce --train
```

`example/utils.py` does not have `--nce` in `setup_parser()`.
Result:
It looks like `--nce` has been replaced by `--loss nce` and README.md should be updated.

It's not clear that the rest of the code still works. This has two issues: firstly the warning from `rnn.py`, secondly the perplexities are all zero.
Moving on to NCE, the reported train PPL is very low, the valid PPL very high.