George-Chia opened 3 months ago
Hello! When I run the following command:

```
python -m eval_agent.main --agent_config openai --exp_config alfworld --split test --verbose
```

I get the following output:

```
All tasks done. Output saved to outputs/gpt-3.5-turbo/alfworld
Average reward: 0.0299
Success rate: 0.0299
```

These numbers are much lower than the results in Table 2 of the paper. Is there any config that needs to be set?