openai / maddpg

Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
https://arxiv.org/pdf/1706.02275.pdf
MIT License

DDPG action space vs multi-agent particle environment action space #71

Open medhijk opened 1 year ago

medhijk commented 1 year ago

In the comparison plots, we see MADDPG being compared against the DDPG algorithm. As far as I know, DDPG can only be used with a continuous action space, but the experiments with multi-agent particle environments like simple_spread and simple_tag have discrete action spaces. What am I missing here?
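
For reference, my understanding is that one common way to apply DDPG-style deterministic policy gradients to a discrete action space is a Gumbel-Softmax (relaxed categorical) output, so the policy emits a continuous vector that approximates a one-hot action. The sketch below is not taken from this repo, it just illustrates that relaxation with a hypothetical `gumbel_softmax_sample` helper:

```python
# Illustrative sketch (assumed, not from this repo): a Gumbel-Softmax relaxation
# lets a deterministic policy output a continuous point on the simplex that
# approximates a one-hot discrete action, so DDPG-style gradients can still flow.
import numpy as np

def gumbel_softmax_sample(logits, temperature=1.0, rng=None):
    """Sample a relaxed one-hot vector from categorical logits."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise: -log(-log(U)) with U ~ Uniform(0, 1)
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    gumbel_noise = -np.log(-np.log(u))
    y = (logits + gumbel_noise) / temperature
    # Softmax over the perturbed logits gives a continuous "soft" one-hot vector
    exp_y = np.exp(y - y.max())
    return exp_y / exp_y.sum()

# Example: 5 discrete movement actions, as in the particle environments
logits = np.array([0.2, 1.5, -0.3, 0.0, 0.7])
relaxed_action = gumbel_softmax_sample(logits, temperature=0.5)
print(relaxed_action)            # continuous vector summing to 1
print(relaxed_action.argmax())   # the discrete action it approximates
```

Is something along these lines what the DDPG baseline in the plots is doing, or is there a different mechanism in the code?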