Closed — guojm14 closed this issue 2 years ago
I notice that in your code the multi-agent MuJoCo environment uses an MDP setting (each agent observes the full state). Thus, the critic inputs of IPPO and MAPPO are the same. I would expect their performance to be similar, but the results in the figure are not. Are there other factors I'm ignoring? I am looking forward to your reply. Thank you!
Hi, I have the same confusion. Have you figured it out?
Not yet. (ToT) Waiting for the author's reply.
Our IPPO baseline follows the original paper's setting, i.e. independent learning (not CTDE) under a POMDP. If you want to reproduce this result: for the non-CTDE setting, set use_centralized_V to False; if you want the policy input to be the local observation, modify the function get_obs() in mujoco_multi.py, i.e. uncomment line 156 and comment out line 157. Hope this can help you!
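For anyone following along, here is a minimal sketch of what use_centralized_V typically controls; the function and variable names below are illustrative, not taken from this repo:

```python
import numpy as np

def build_critic_input(obs, state, use_centralized_V):
    """Hedged sketch: select the value-function input for each agent.

    obs:   array of shape (n_agents, obs_dim) - local observations
    state: array of shape (state_dim,)        - global environment state
    """
    if use_centralized_V:
        # CTDE (MAPPO): every agent's critic sees the shared global state.
        share_obs = np.tile(state, (obs.shape[0], 1))
    else:
        # Independent learning (IPPO): each critic only sees its own observation.
        share_obs = obs
    return share_obs
```

Note that when the observation equals the state (as in the MDP configuration of this environment), both branches feed the critic the same information, which is why use_centralized_V alone does not separate IPPO from MAPPO in that configuration.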
Thanks very much for your reply! However, I'm still confused. For multi-agent MuJoCo, the observation is the same as the state (see https://github.com/cyanrain7/TRPO-in-MARL/issues/11), so use_centralized_V makes no difference. I want to confirm: is the difference between IPPO and MAPPO in your code just the use_centralized_V setting, as in https://github.com/marlbenchmark/on-policy? If so, could I expect the performance to be similar? Looking forward to your reply~
No. To follow the original paper, IPPO uses the POMDP setting. If you want to enable this, modify the function get_obs() in mujoco_multi.py: uncomment line 156 and comment out line 157. Hope this can help you!
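To make the suggested edit concrete, here is a rough sketch of what the two alternative lines in get_obs() amount to; the actual structure of mujoco_multi.py may differ, and the helper names below (get_obs_agent, get_state) are assumptions for illustration only:

```python
# Hedged sketch of the get_obs() toggle described above; not a copy of
# mujoco_multi.py. "~line 156/157" refer to the original file.
def get_obs(self):
    """Return one observation per agent."""
    # POMDP setting (IPPO, as in the original paper): each agent only sees
    # the local observation built from its own joints/limbs.  (~line 156)
    obs_n = [self.get_obs_agent(a) for a in range(self.n_agents)]
    # MDP setting (default here): every agent receives the full global state,
    # which makes the critic inputs of IPPO and MAPPO identical.  (~line 157)
    # obs_n = [self.get_state() for _ in range(self.n_agents)]
    return obs_n
```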