THUDM / AgentBench

A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
https://llmbench.ai
Apache License 2.0

[Bug/Assistance] - Reproducing Results on Alfworld (HH) (vs. ReAct paper) #127

Open ai-nikolai opened 8 months ago

ai-nikolai commented 8 months ago

Bug / Assistance Description

The results reported in the HH column are very different from those in the ReAct paper. In particular, ReAct reports a much higher success rate (see the screenshots below).

To Reproduce

See the screenshots below. Your results in the HH column indicate 16% success for text-davinci-002 or gpt-3.5-turbo. However, the results using text-davinci-002 in ReAct indicate 78% (second screenshot). This is a significant difference.

Screenshots or Terminal Copy&Paste

[Screenshot 1: AgentBench results table (HH column). Screenshot 2: ReAct paper results table.]

Concrete Questions / Actions

Please tell us:

  1. How does your evaluation for ALFWorld (HH) differ from ReAct's?
  2. Which exact model did you use?
  3. Which prompts did you use (1-shot, 2-shot), and are they the same as in the ReAct paper?
  4. Why are the results so different?
ai-nikolai commented 8 months ago

@cenyk1230 @Btlmd @1049451037 @zfjsail

zhc7 commented 8 months ago

Please read the paper carefully. You can find all the prompts in the appendix or in the code. The results are different because: 1. we are not using the same prompt; 2. we are not using exactly the same environment.

ai-nikolai commented 8 months ago

Thanks for getting back to me, @zhc7.

  1. Thanks for clarifying. Yes, a prompt example can be seen in Appendix G.2, which I guess corresponds to either: a. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_react.json b. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_plan_first.json

  2. Can you elaborate on how the environment is not exactly the same? [Do you use a different version of ALFWorld, etc.? A quick version check is sketched after this comment.]

The reason for asking is to understand whether you were able to get close to the results reported in ReAct, and what the exact differences might be, as the ReAct results seem very hard to reproduce.
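For reference, one way to check which ALFWorld build an environment is running (a minimal sketch; it assumes ALFWorld was installed as the `alfworld` PyPI distribution rather than vendored into the repo):

```python
# Minimal sketch: report the installed ALFWorld package version.
# Assumes ALFWorld was installed as the `alfworld` PyPI distribution;
# editable or source installs may report differently or not at all.
from importlib.metadata import version, PackageNotFoundError

try:
    print("alfworld version:", version("alfworld"))
except PackageNotFoundError:
    print("alfworld is not installed as a package (perhaps vendored or installed from source)")
```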

zhc7 commented 8 months ago

Hi @ai-nikolai, sorry for the late reply; we've been quite busy lately. To answer your question, I believe the main difference is the prompting technique. We weren't aiming to reproduce ReAct's results, but to design a prompt and an evaluation process that is relatively fair to all the models. The prompt we used is listed in Appendix G of the paper. The evaluation process is located at https://github.com/THUDM/AgentBench/blob/2f3c343494464762888d0d0da4509ea5411906c6/src/server/tasks/alfworld/task.py#L105 .

Can you elaborate on how the environment is not exactly the same? [Do you use a different version of ALFWorld, etc.?]

The main differences come from adapting ALFWorld to our framework and from setting some limitations and rules to avoid prolonged evaluations.
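For readers trying to pin down this difference, here is a minimal sketch of what such a round limit might look like. It is illustrative only, not the actual AgentBench task code (see the task.py link above for that); `env`, `agent`, and `MAX_ROUNDS` are hypothetical names.

```python
# Illustrative only: a round-limited interaction loop of the kind described above.
# `env`, `agent`, and MAX_ROUNDS are hypothetical; the real rules live in task.py.
MAX_ROUNDS = 35  # hard cap so a stuck agent cannot prolong the evaluation


def run_episode(env, agent):
    observation = env.reset()
    for _ in range(MAX_ROUNDS):
        action = agent.act(observation)
        observation, done, success = env.step(action)
        if done:
            return success
    return False  # treated as a failure once the round budget is exhausted
```

A cap like this trades a small number of lost late successes for a bounded evaluation time across all models.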

To sum up, you may have to investigate this problem further yourself.
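For anyone who does dig further, a minimal sketch for inspecting the two prompt files linked earlier in the thread (it assumes a local clone of THUDM/AgentBench and makes no assumption about the files' internal schema, so only the top-level structure is printed for comparison with the ReAct few-shot prompts):

```python
import json

# Paths are relative to a local clone of THUDM/AgentBench; adjust as needed.
PROMPT_FILES = [
    "src/server/tasks/alfworld/prompts/alfworld_multiturn_react.json",
    "src/server/tasks/alfworld/prompts/alfworld_multiturn_plan_first.json",
]

for path in PROMPT_FILES:
    with open(path, encoding="utf-8") as f:
        prompt = json.load(f)
    # Only report the top-level shape; the internal schema is not assumed here.
    size = len(prompt) if isinstance(prompt, (list, dict)) else "n/a"
    print(f"{path}: top-level type={type(prompt).__name__}, entries={size}")
```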