Open ai-nikolai opened 8 months ago
@cenyk1230 @Btlmd @1049451037 @zfjsail
Please read the paper carefully. You can find all the prompts in the appendix or in the code. The results are different because (1) we are not using the same prompt, and (2) we are not using exactly the same environment.
Thanks for coming back @zhc7.
Thanks for clarifying. Yes, in Appendix G.2 a prompt example can be seen, which I guess corresponds to either:
a. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_react.json
b. https://github.com/THUDM/AgentBench/blob/main/src/server/tasks/alfworld/prompts/alfworld_multiturn_plan_first.json
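For anyone else trying to compare these against Appendix G.2, here is a minimal sketch that just fetches both prompt files and prints them. The raw.githubusercontent.com URLs are derived from the blob links above; no particular JSON structure is assumed beyond the files being valid JSON.

```python
# Sketch: fetch the two ALFWorld prompt files and print them for manual
# comparison against the example shown in Appendix G.2 of the paper.
import json
import urllib.request

BASE = ("https://raw.githubusercontent.com/THUDM/AgentBench/main/"
        "src/server/tasks/alfworld/prompts/")

for name in ("alfworld_multiturn_react.json", "alfworld_multiturn_plan_first.json"):
    with urllib.request.urlopen(BASE + name) as resp:
        prompt = json.load(resp)
    print(f"===== {name} =====")
    # Print only the beginning of each file to keep the output readable.
    print(json.dumps(prompt, indent=2)[:2000])
```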
Can you elaborate on how the environment is not exactly the same? [Do you use a different version of alfworld, etc.?]
The reason for asking is to understand whether you were able to get close to the results reported in ReAct, and what the exact differences might be, since ReAct's results seem very hard to reproduce.
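In case version mismatches are part of the gap, a quick way to check what is installed locally is the snippet below. It assumes the environment is provided by the alfworld package with textworld as the text-game backend; adjust the package names if your setup differs.

```python
# Check which versions of the ALFWorld-related packages are installed,
# to rule out environment-version differences when comparing results.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("alfworld", "textworld"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```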
Hi @ai-nikolai, sorry for the late reply, we've been quite busy lately. To answer your question, I believe the main difference is the prompting technique. We weren't aiming to reproduce ReAct's results, but to design a prompt and an evaluation process that is relatively fair to all the models. The prompt we used is listed in Appendix G of the paper. The evaluation process is located at https://github.com/THUDM/AgentBench/blob/2f3c343494464762888d0d0da4509ea5411906c6/src/server/tasks/alfworld/task.py#L105 .
Can you elaborate on how the environment is not exactly the same? [Do you use a different version of alfworld, etc.?]
The main differences are adapting ALFWorld to the framework and setting some limitations and rules to avoid prolonged evaluation.
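To make the second point concrete, here is a hypothetical illustration (not the actual AgentBench code; `env`, `agent`, and the step interface are made up for the example) of the kind of rule meant by "avoiding prolonged evaluation": cap the number of interaction rounds and count the episode as unsuccessful once the cap is hit.

```python
# Hypothetical sketch of a round limit on an ALFWorld-style episode.
MAX_ROUNDS = 35  # assumed cap; the real limit lives in the task configuration

def run_episode(env, agent, max_rounds=MAX_ROUNDS):
    observation = env.reset()
    for _ in range(max_rounds):
        action = agent.act(observation)          # hypothetical agent interface
        observation, done, success = env.step(action)  # hypothetical env interface
        if done:
            return success
    return False  # ran out of rounds -> treated as a failure
```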
To sum up, you may have to do some more investigation into this problem.
Bug / Assistance Description
The results reported in the HH column are very different from those in the ReAct paper. In particular, ReAct reports a much higher success rate (78% with text-davinci-002) than what is shown here.
To Reproduce
See screenshots below. Your results in the HH column indicate 16% success for text-davinci-002 or gpt-3.5-turbo. However, the results using text-davinci-002 in ReAct indicate 78% (second screenshot). This is a significant difference.
Screenshots or Terminal Copy&Paste
Concrete Questions / Actions: Please tell us: