noahshinn / reflexion

[NeurIPS 2023] Reflexion: Language Agents with Verbal Reinforcement Learning
MIT License

Reproducing Alfworld Results #35

Open ai-nikolai opened 10 months ago

ai-nikolai commented 10 months ago

Hi,

Thanks for the great work. Unfortunately, we are unable to reproduce your results for ReAct / Reflexion on Alfworld.

For example, Env0 and Env1 are successful in your results, but they always fail on our end. (Other envs do succeed for us, so the setup works at least sometimes.)

@noahshinn

noahshinn commented 10 months ago

Hi @ai-nikolai , what model are you using?

ai-nikolai commented 10 months ago

Thanks. The model used: gpt-3.5-turbo @noahshinn

ai-nikolai commented 10 months ago

@noahshinn would it also be possible to upload the actual game logs for alfworld as well?

noahshinn commented 10 months ago

The model gpt-3.5-turbo is not the same model used during the paper's time (Feb 2023). We used text-davinci-002. I'd expect that the mistakes you see result from the inferred action not matching any of the actions in the action space. We followed ReAct's implementation for AlfWorld results to stay consistent with their work.

To address this, I would advise you to display the action space to the model to eliminate parsing errors. I can add a side implementation for this if it would be helpful for you. I will also dig to see if I can find the original log files from the text-davinci-002 runs.
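A sketch of what "display the action space" could look like, this is not the repo's actual code. The `admissible_commands` list mirrors the per-step valid-action info that AlfWorld's text environment exposes; the prompt wording and the fuzzy-matching fallback are assumptions for illustration.

```python
# Hypothetical sketch: show the model its valid actions, then snap its
# free-form reply onto the closest admissible command so that no reply
# falls outside the action space.
import difflib

def build_prompt(observation: str, admissible_commands: list[str]) -> str:
    """Append the action space to the observation so the model can only
    pick from valid commands."""
    actions = "\n".join(f"- {a}" for a in admissible_commands)
    return f"{observation}\nAvailable actions:\n{actions}\n> "

def snap_to_action_space(raw: str, admissible_commands: list[str]) -> str:
    """Map the model's raw reply onto the closest admissible command."""
    matches = difflib.get_close_matches(
        raw.strip(), admissible_commands, n=1, cutoff=0.0
    )
    return matches[0]
```

With `cutoff=0.0` the snap always returns some admissible command, so a near-miss like "go to shelf one" still resolves to "go to shelf 1" instead of failing to parse.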

ai-nikolai commented 10 months ago

Thank you @noahshinn.

Please let us know if you had any luck finding the original text-davinci-002 logs. This would be a really big help. Thank you.

dong-river commented 9 months ago

I had the same issue with gpt-3.5-turbo. The success rate seems much lower: the first-trial success rate for me on a subset of tasks is only around 17%, which is consistent with the report in the AgentBench paper. So providing the original logs would be really helpful.

ai-nikolai commented 9 months ago

Hi all,

A couple of comments to follow up on this:

  1. The results you report are very hard to reproduce. (The model you used, text-davinci-002, is deprecated; the two alternatives, davinci-002 and gpt-3.5-turbo, both reach an accuracy of about 0.3 on a subset, while your reported results are around 0.7.) Could you provide the traces, or tell us how to reproduce your results?
  2. Secondly, please see the attached screenshot from AgentBench. The relevant column is HH, where you can see that only GPT-4 achieves results comparable to your ReAct results, while text-davinci-002 (the model your code specifies) only achieves 16%, which is in line with our reproducibility experiments.
  3. Finally, the original ReAct paper implemented the success condition as info["won"]==True, while you use done==True. This is flagged as an issue in the original alfworld repository: https://github.com/alfworld/alfworld/issues/51
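The difference between the two success conditions can be made concrete with a minimal sketch (the dict shape assumes AlfWorld's unbatched info dict; in AlfWorld, `done` can become True when an episode ends for any reason, while `info["won"]` is only True on an actual task success):

```python
# Minimal sketch comparing the two success checks discussed above.
# Treating `done` alone as success can inflate the reported rate,
# because a failed episode also terminates with done == True.
def episode_succeeded(done: bool, info: dict) -> bool:
    # ReAct-style check: require the explicit win flag, not just `done`.
    return bool(done and info.get("won", False))
```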

Concrete Actions / Questions:

  1. Please clarify how to obtain the results you report. (With the weaker models, or were stronger models used, or do you have traces?)
  2. Please clarify whether we misunderstand your results, or whether they are actually 70+% rather than closer to 30%.

@noahshinn @ysymyth @becklabs

[Screenshot: AgentBench results table, 2024-03-08]
ai-nikolai commented 8 months ago

@noahshinn - any updates on the above?

CSUN1997 commented 6 months ago

Hi @ai-nikolai, I am also trying to reproduce the results. Performance was bad in the beginning; after adding these lines to parse the action, the performance went back to normal:

[Image: action-parsing code snippet]
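The snippet in the image above is not visible in this transcript; as a hedged guess, the kind of post-processing that fixes gpt-3.5-turbo's action strings might look like this (prefixes and punctuation chosen for illustration, not taken from the screenshot):

```python
# Hypothetical action cleaner: strip decoration the model sometimes
# adds around the command so that exact-match lookup succeeds.
import re

def clean_action(raw: str) -> str:
    action = raw.strip().lower()
    # Drop a leading "> " or "action:" prefix the model sometimes emits.
    action = re.sub(r"^(>\s*|action:\s*)", "", action)
    # Strip trailing punctuation that breaks exact-match lookup.
    return action.rstrip(".!")
```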
ai-nikolai commented 2 weeks ago

@CSUN1997 @noahshinn @dong-river @ysymyth - It seems there are a couple of issues, which are summarised in this paper StateAct (https://arxiv.org/abs/2410.02810).

Specifically the issues are:

  1. Different GPT models have very different performance, with older models often performing much better.
  2. Secondly, what @CSUN1997 mentions is also described in StateAct as "Correction": the GPT models often produce put <object> in <place>, but the correct AlfWorld syntax is put <object> in/on <place>.
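The "Correction" step for the put syntax can be sketched in a few lines (the regex is illustrative, not StateAct's actual implementation):

```python
# Hedged sketch of the put-syntax correction: rewrite
# "put <object> in <place>" / "put <object> on <place>" into AlfWorld's
# expected "put <object> in/on <place>" form.
import re

def fix_put_syntax(action: str) -> str:
    # Already-correct actions ("... in/on ...") and non-put actions
    # do not match the pattern and pass through unchanged.
    return re.sub(r"^put (.+?) (?:in|on) (.+)$", r"put \1 in/on \2", action)
```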