cyanrain7 / TRPO-in-MARL


Confused about the results of IPPO and MAPPO. #15

Closed · guojm14 closed this issue 2 years ago

guojm14 commented 2 years ago

I notice that in your code the multi-agent MuJoCo environment uses an MDP setting, so the inputs to the critics of IPPO and MAPPO are the same. I would expect the performances to be similar, but the results in the figure are not. Are there other factors I'm ignoring? I am looking forward to your reply. Thank you!
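
To make the point concrete, here is a schematic sketch of the two critic inputs; the names below are illustrative, not taken from the repo's code:

```python
# Schematic only: illustrative names, not code from this repo.
def critic_inputs(env, agent_id):
    obs = env.get_obs()      # list of per-agent observations
    state = env.get_state()  # global state

    ippo_input = obs[agent_id]  # IPPO: decentralized critic sees the agent's own observation
    mappo_input = state         # MAPPO: centralized critic sees the global state

    # In this repo's multi-agent MuJoCo wrapper, each agent's observation equals
    # the global state (MDP setting), so the two inputs coincide.
    return ippo_input, mappo_input
```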

langfengQ commented 2 years ago

Hi, I have the same confusion. Have you figured it out?

guojm14 commented 2 years ago

Not yet. (ToT) Waiting for the author's reply.

cyanrain7 commented 2 years ago

Our IPPO baseline follows the original paper's setting, i.e., independent learning (not CTDE) with POMDP observations. To reproduce this result: for the non-CTDE setting, set use_centralized_V to False; to make the policy's input the local observation, modify the function get_obs() in mujoco_multi.py, i.e., uncomment line 156 and comment out line 157. Hope this helps!
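
A minimal sketch of the two changes, assuming an argparse-style config namespace (called all_args here, as in related MAPPO codebases) and assuming lines 156/157 of mujoco_multi.py are adjacent return statements for the local observations and the full state; the bodies below are illustrative, not copied from the repo:

```python
# Sketch only: the flag name comes from the author's comment; the namespace
# name `all_args` and the return-statement bodies are assumptions.

# 1) Non-CTDE (independent learning): disable the centralized critic.
all_args.use_centralized_V = False

# 2) POMDP observations: inside get_obs() in mujoco_multi.py, swap which
#    return statement is active.
def get_obs(self):
    # line 156 -- uncomment for the POMDP / IPPO-paper setting (local obs):
    return [self.get_obs_agent(agent_id) for agent_id in range(self.n_agents)]
    # line 157 -- comment out: MDP setting, every agent sees the full state:
    # return [self.env._get_obs() for _ in range(self.n_agents)]
```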

guojm14 commented 2 years ago

Thanks very much for your reply! However, I'm still confused. For multi-agent MuJoCo, the observation is the same as the state (see https://github.com/cyanrain7/TRPO-in-MARL/issues/11), so use_centralized_V makes no difference. I want to confirm: is the difference between IPPO and MAPPO in your code just the setting of use_centralized_V, as in https://github.com/marlbenchmark/on-policy? If so, should I expect similar performance? Looking forward to your reply~

cyanrain7 commented 2 years ago

No. To follow the original paper, our IPPO uses the POMDP setting. To enable it, modify the function get_obs() in mujoco_multi.py: uncomment line 156 and comment out line 157. Hope this helps!