bigcode-project / bigcode-evaluation-harness

A framework for the evaluation of autoregressive code generation language models.

Evaluation result of bigcode/starcoder2-3b on gsm8k_pal does not match the paper #272

Open nongfang55 opened 2 months ago

nongfang55 commented 2 months ago

I tried to evaluate the model bigcode/starcoder2-3b on the pal-gsm8k-greedy benchmark with the command below:

accelerate launch --main_process_port 6789 main.py --model bigcode/starcoder2-3b --max_length_generation 2048 --tasks pal-gsm8k-greedy --n_samples 1 --batch_size 1 --do_sample False --allow_code_execution

Then I got this result:

  "pal-gsm8k-greedy": {
    "accuracy": 0.04624715693707354,
    "num_failed_execution": 1016
  },
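Note that 1016 of the executions failed, so most generated programs never ran successfully. As a first debugging step, the run could be repeated with the harness's --save_generations flag so the raw completions can be inspected; the flag exists in main.py, while the output filename here is an assumption:

accelerate launch --main_process_port 6789 main.py --model bigcode/starcoder2-3b --max_length_generation 2048 --tasks pal-gsm8k-greedy --n_samples 1 --batch_size 1 --do_sample False --allow_code_execution --save_generations --save_generations_path generations.json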

Since the value reported in the StarCoder2 paper is 27.7 (Table 14), the harness result does not match the paper.

Could someone look into why this happens?
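For reference, once the generations are saved, a minimal Python sketch like the one below can show whether the completions are truncated or wrapped in stray text that breaks execution. It assumes the saved file is a JSON list with one list of completion strings per problem, and that the filename matches the --save_generations_path used above (the harness may also suffix the task name onto the path):

import json

# Load the generations saved by the run above; the filename is the
# --save_generations_path passed to the harness (an assumption).
with open("generations.json") as f:
    generations = json.load(f)

# One list of completion strings per problem. Print the first few
# programs to check for truncation or extra text around the code.
for i, gens in enumerate(generations[:5]):
    print(f"--- problem {i} ---")
    print(gens[0])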