bigcode-project / bigcode-evaluation-harness

A framework for the evaluation of autoregressive code generation language models.
Apache License 2.0

Different stop words between humaneval and instruct humaneval #119

Closed GinaJihyeonLee closed 11 months ago

GinaJihyeonLee commented 12 months ago

Hi, thanks for adding the instruct_humaneval task. Why are the stop words different between the humaneval and instruct_humaneval tasks? The stop words for humaneval are ["\nclass", "\ndef", "\n#", "\n@", "\nprint", "\nif"], while those for instruct_humaneval are ["if __name__", "\nprint", "\nclass"].

ArmelRandy commented 11 months ago

Hi. The stop words for HumanEval (HE) are mostly inherited from the Codex paper. They prevent the model from generating irrelevant text after the code it is supposed to complete. For instruct_humaneval (IHE), the idea is similar. You'll notice that some stop words were removed compared to HE, namely \n@ and \n#, although we could have kept them. IHE also handles the case where we ask the LM to generate a solution from the docstring alone (not code completion). In that case we cannot use \ndef as a stop word, since the solution itself starts with a def line. Also, the model can additionally generate test cases guarded by if __name__ == "__main__", or try to embed the implementation in a class (e.g. class Solution). These are scenarios we want to avoid, because the evaluation only cares about the function implementation.
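
For illustration, here is a minimal sketch of how stop-word truncation plays out in the two settings. This is not the harness's exact API: the helper name truncate_at_stop_words and the example generations are made up, and only the two stop-word lists are taken from the discussion above.

```python
# Stop-word lists as quoted in this issue.
HUMANEVAL_STOP_WORDS = ["\nclass", "\ndef", "\n#", "\n@", "\nprint", "\nif"]
INSTRUCT_HUMANEVAL_STOP_WORDS = ["if __name__", "\nprint", "\nclass"]


def truncate_at_stop_words(generation: str, stop_words: list[str]) -> str:
    """Cut the generation at the earliest occurrence of any stop word."""
    cut = len(generation)
    for stop in stop_words:
        idx = generation.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return generation[:cut]


# Completion setting (HE): the prompt already contains the `def` line, so the
# generation is only the function body plus possible trailing junk; "\ndef"
# safely removes a spurious extra function.
he_generation = "    return a + b\n\ndef unwanted_helper():\n    pass\n"
print(repr(truncate_at_stop_words(he_generation, HUMANEVAL_STOP_WORDS)))
# -> '    return a + b\n'

# Instruction setting (IHE): the model writes the whole function from the
# docstring, so "\ndef" would cut the answer itself away, while the IHE list
# only strips the trailing `if __name__ == "__main__"` test block.
ihe_generation = (
    '\ndef add(a, b):\n    return a + b\n\n'
    'if __name__ == "__main__":\n    print(add(1, 2))\n'
)
print(repr(truncate_at_stop_words(ihe_generation, HUMANEVAL_STOP_WORDS)))
# -> '' (everything removed, because "\ndef" matches at the very start)
print(repr(truncate_at_stop_words(ihe_generation, INSTRUCT_HUMANEVAL_STOP_WORDS)))
# -> '\ndef add(a, b):\n    return a + b\n\n'
```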

TL;DR: compared to HE, if __name__ more or less replaces \nif, \ndef cannot be used as a stop word (explained above), and \nprint and \nclass are kept. We found that \n@ and \n# did not occur much in failure cases, but they could also be kept.