noahshinn / reflexion

[NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning

Can't reproduce HumanEval score #30

Closed · geekan closed this issue 10 months ago

geekan commented 1 year ago

I followed programming_runs/run_reflexion.sh and got scores of 0.77-0.83 across multiple trials.

FloridSleeves commented 10 months ago

I cannot reproduce the result either 😭 Could the authors release the GPT-4-generated tests used in the experiments?

noahshinn commented 10 months ago

Hi @geekan and @FloridSleeves,

Like many LLM papers, we are subject to the performance of proprietary models, since there is no open-source option that evaluates at a comparable level of performance; we show results for some open-source models in the appendix of the latest version of the paper to illustrate this. If you want to use OpenAI's models with Reflexion, I would advise adding the -0314 suffix to the gpt-4 or gpt-3.5-turbo model names so that you evaluate a checkpoint from closer to the time of our experiments. I hope we will have more open-source options to run Reflexion on in the future.
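
For anyone trying this, here is a minimal sketch (not the repo's own code) of pinning a dated checkpoint with the OpenAI Python client; the model name follows the suggestion above, and whether the -0314 snapshots are still served depends on your account and OpenAI's deprecation schedule. If you run through run_reflexion.sh, the same idea applies to the model name the script passes through.

```python
# Minimal sketch: pin a dated snapshot instead of the moving "gpt-4" alias so
# results are tied to a fixed checkpoint. Availability of the -0314 snapshots
# depends on your account and OpenAI's deprecation schedule.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PINNED_MODEL = "gpt-4-0314"  # snapshot closer in time to the paper's experiments

def complete(prompt: str) -> str:
    """Single chat completion against the pinned checkpoint."""
    response = client.chat.completions.create(
        model=PINNED_MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # keep decoding as deterministic as possible
    )
    return response.choices[0].message.content
```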

allanj commented 9 months ago

I just ran programming_runs/run_reflexion.sh directly and also got only about 80%.

Also, the HumanEval set here contains only 161 tasks; shouldn't it be 164?

xihuai18 commented 7 months ago

> I just ran programming_runs/run_reflexion.sh directly and also got only about 80%.
>
> Also, the HumanEval set here contains only 161 tasks; shouldn't it be 164?

I am also curious about this: HumanEval-Python should have 164 tasks.

cassanof commented 7 months ago

> I just ran programming_runs/run_reflexion.sh directly and also got only about 80%. Also, the HumanEval set here contains only 161 tasks; shouldn't it be 164?
>
> I am also curious about this: HumanEval-Python should have 164 tasks.

We used the MultiPL-E benchmark, which includes 161 tasks; we also use MultiPL-E for our Rust experiments. The HumanEval dataset is not clean, so transformations are required for a sound evaluation. MultiPL-E makes the following adjustments to the Python dataset:

> Of the 164 original HumanEval benchmarks: (1) we exclude 3 benchmarks that have Python helper functions in their prompt; (2) we modify 2 benchmarks to use unit tests instead of randomized testing; and (3) for certain typed languages, we fail to compile up to 5 benchmarks with untranslatable types. These changes do not lead to significantly different results for Python.
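
One quick way to confirm which variant you are actually evaluating is to count the tasks in the dataset file your run points at. A minimal sketch follows; the path is a placeholder for whatever dataset path your run configuration uses, not a guaranteed location in the repo.

```python
# Sanity check: count the tasks in the dataset file used by your run.
# 161 tasks indicates the MultiPL-E-filtered HumanEval split; 164 would be
# the original OpenAI HumanEval release.
import json

DATASET_PATH = "benchmarks/humaneval-py.jsonl"  # placeholder: use your run's dataset path

with open(DATASET_PATH) as f:
    tasks = [json.loads(line) for line in f if line.strip()]

print(f"{len(tasks)} tasks in {DATASET_PATH}")
```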

Going forward, I recommend using EvalPlus, or avoiding HumanEval altogether in favor of datasets that are guaranteed not to be included in training data (e.g., LiveCodeBench).
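
For anyone following that suggestion, here is a hedged sketch of loading HumanEval+ through EvalPlus's Python data API, as shown in its README; field names and the sample format may differ in newer releases, so verify against the current EvalPlus documentation.

```python
# Sketch of loading the HumanEval+ problem set through EvalPlus's data API,
# based on the interface shown in the EvalPlus README; check the current
# release before relying on exact field names.
from evalplus.data import get_human_eval_plus, write_jsonl

problems = get_human_eval_plus()  # dict: task_id -> problem fields (prompt, entry_point, ...)
print(f"Loaded {len(problems)} HumanEval+ problems")

# Placeholder samples in the file format the EvalPlus evaluator consumes;
# replace the stub solution with real model output before scoring.
samples = [
    {"task_id": task_id, "solution": problem["prompt"] + "    pass\n"}
    for task_id, problem in problems.items()
]
write_jsonl("samples.jsonl", samples)
```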