bigcode-project / bigcode-evaluation-harness

A framework for the evaluation of autoregressive code generation language models.
Apache License 2.0

Unable to execute the `MultiPL-E` task for `python` language #268

Open manthan0227 opened 1 month ago

manthan0227 commented 1 month ago

Hello, I am trying to run the bigcode-evaluation-harness on the MultiPL-E task, but generation fails for the Python language. I can generate responses for the other 18 languages without issue. The problem seems to be that the task tries to load a `humaneval-py` dataset, and if you look on the Hugging Face Hub, that dataset configuration does not exist.
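For reference, a minimal sketch of the kind of invocation that hits this (the model name and generation flags here are illustrative placeholders, not taken from the original report; the relevant part is selecting the Python MultiPL-E task):

```shell
# Hypothetical reproduction, run from the bigcode-evaluation-harness root.
# Model and generation settings are placeholders; the failure is triggered
# by selecting the Python MultiPL-E task, which attempts to load a
# humaneval-py dataset configuration from the Hugging Face Hub.
accelerate launch main.py \
  --model bigcode/santacoder \
  --tasks multiple-py \
  --max_length_generation 512 \
  --n_samples 1 \
  --allow_code_execution
```

The same command with `--tasks multiple-js`, `--tasks multiple-cpp`, etc. reportedly works, which is consistent with only the `humaneval-py` configuration being missing upstream.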