harvardnlp / encoder-agnostic-adaptation

Encoder-Agnostic Adaptation for Conditional Language Generation
https://arxiv.org/abs/1908.06938
MIT License

Difficulty in evaluating perplexity in story generation task #10

Closed fangleai closed 4 years ago

fangleai commented 4 years ago

Thanks for sharing the source code. I re-trained the model on the story generation task on the WritingPrompts dataset using almost the same config (except fewer GPUs) and your provided .bpe files. My goal is to reproduce the test perplexity reported in Table 3 of your paper.

However, the only approach I found promising does not seem to work: the OpenNMT library in your code can compute a "GOLD score" for the target sequences, which is printed after running `python translate.py` with the targets supplied. However, I get unreasonable PPL results:

```
PRED AVG SCORE: -0.0040, PRED PPL: 1.0040
GOLD AVG SCORE: -9.7727, GOLD PPL: 17548.5739
```

The GOLD perplexity is implausibly high. How did you evaluate the PPL reported in Table 3? Would you be willing to share that evaluation code in the repository? Many thanks.
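One possible source of the discrepancy: corpus-level perplexity is conventionally exp of the *total* negative log-likelihood divided by the *total* target-token count, not a mean over per-sentence scores. A minimal sketch of that computation (the per-sentence summed log-probabilities and token counts are illustrative inputs, not OpenNMT's actual output format, which is an assumption here):

```python
import math

def corpus_ppl(sent_log_probs, sent_token_counts):
    """Corpus-level perplexity from per-sentence scores.

    sent_log_probs: summed log p(y|x) for each target sentence (<= 0).
    sent_token_counts: number of target tokens per sentence
        (whether EOS is counted is a convention choice; assumed included).
    """
    # Normalize the total NLL by the total token count, rather than
    # averaging per-sentence perplexities or per-sentence avg scores.
    total_nll = -sum(sent_log_probs)
    total_tokens = sum(sent_token_counts)
    return math.exp(total_nll / total_tokens)

# Toy example: two sentences, summed log-probs -12.0 and -8.0,
# lengths 6 and 4 tokens -> exp(20 / 10) = e^2 ~ 7.389
print(corpus_ppl([-12.0, -8.0], [6, 4]))
```

If the script instead averages sentence-level scores (each already divided by its own length, or not divided at all), the resulting "PPL" can differ wildly from the token-normalized value, which may explain numbers like 17548.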

fangleai commented 4 years ago

Linked to pull request https://github.com/harvardnlp/encoder-agnostic-adaptation/pull/11