wasiahmad / PLBART

Official code of our work, Unified Pre-training for Program Understanding and Generation [NAACL 2021].
https://arxiv.org/abs/2103.06333
MIT License

Why new nl_eval.py always outputs the "0" bleu score? #18

Closed LeeSureman closed 3 years ago

LeeSureman commented 3 years ago

I ran the code_to_text experiment. I used both the old evaluation script `scripts/code_to_text/evaluator.py` and the newly released `nl_eval.py`, and found that `nl_eval.py` outputs a BLEU score of 0 while the old script outputs a normal BLEU score. (screenshot attached)

wasiahmad commented 3 years ago

It is a bug: `nl_eval.py` assumes the gold file is a plain-text file, but in the code_to_text task it is a JSON file. It will be fixed shortly.
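The mismatch described above can be sketched as a gold-file loader that accepts both formats. This is a minimal illustration, not the repository's actual fix; the JSON-lines layout and the `docstring` field name are assumptions for illustration:

```python
import json

def load_references(path):
    """Load gold references from either a plain-text file
    (one reference per line) or a JSON-lines file where each
    line is an object holding the reference text.
    The "docstring" field name is a hypothetical example."""
    refs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                obj = json.loads(line)
                if isinstance(obj, dict):
                    # JSON-lines gold file: pull the reference text field
                    refs.append(obj.get("docstring", ""))
                else:
                    refs.append(str(obj))
            except json.JSONDecodeError:
                # plain-text gold file: the line itself is the reference
                refs.append(line)
    return refs
```

Treating a JSON line as raw text (the failure mode in this issue) makes every reference differ from the hypothesis, which is why BLEU collapses to 0.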

wasiahmad commented 3 years ago

Fixed with commit fc1363783c17ccbc5747cb264da577a731965e9c.