
COach-PlAyer (COPA) MARL

This is the PyTorch implementation of Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team Composition (ICML 2021), a framework for coordinating teams whose composition can change dynamically. The code is released under the MIT License.

Dependencies

Install the included multiagent-particle-envs package in editable mode:

pip install -e multiagent-particle-envs/
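
For a fresh checkout, the full setup might look like the following. The clone URL assumes this repository's standard GitHub address; if multiagent-particle-envs is vendored as a git submodule, clone with --recursive first.

git clone https://github.com/Cranial-XIX/marl-copa.git
cd marl-copa
pip install -e multiagent-particle-envs/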

Experiment

Please see run.sh for examples of how to run the code:

./run.sh

Citation

If you find this work interesting or the repository useful, please consider citing the paper:

@InProceedings{pmlr-v139-liu21m,
  title     = {Coach-Player Multi-agent Reinforcement Learning for Dynamic Team Composition},
  author    = {Liu, Bo and Liu, Qiang and Stone, Peter and Garg, Animesh and Zhu, Yuke and Anandkumar, Anima},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {6860--6870},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/liu21m/liu21m.pdf},
  url       = {https://proceedings.mlr.press/v139/liu21m.html},
  abstract  = {In real-world multi-agent systems, agents with different capabilities may join or leave without altering the team’s overarching goals. Coordinating teams with such dynamic composition is challenging: the optimal team strategy varies with the composition. We propose COPA, a coach-player framework to tackle this problem. We assume the coach has a global view of the environment and coordinates the players, who only have partial views, by distributing individual strategies. Specifically, we 1) adopt the attention mechanism for both the coach and the players; 2) propose a variational objective to regularize learning; and 3) design an adaptive communication method to let the coach decide when to communicate with the players. We validate our methods on a resource collection task, a rescue game, and the StarCraft micromanagement tasks. We demonstrate zero-shot generalization to new team compositions. Our method achieves comparable or better performance than the setting where all players have a full view of the environment. Moreover, we see that the performance remains high even when the coach communicates as little as 13% of the time using the adaptive communication strategy.}
}
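
The abstract above summarizes COPA's three ingredients: attention modules for the coach and the players, a variational regularization objective, and adaptive coach-to-player communication. As a rough illustration of the first ingredient only, the minimal PyTorch sketch below shows how a coach with a global view might use self-attention over a variable number of players to emit one strategy vector per player. Every name, shape, and hyperparameter here (CoachSketch, obs_dim, strategy_dim, the head count) is a hypothetical assumption for illustration, not code from this repository.

import torch
import torch.nn as nn

class CoachSketch(nn.Module):
    # Hypothetical illustration only -- not the architecture in this repo.
    def __init__(self, obs_dim, embed_dim=64, strategy_dim=16):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed_dim)
        # Self-attention over player embeddings keeps the module agnostic
        # to team size, which is what dynamic team composition requires.
        self.attn = nn.MultiheadAttention(embed_dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(embed_dim, strategy_dim)

    def forward(self, global_obs):
        # global_obs: (batch, n_players, obs_dim); n_players may differ per call
        x = self.embed(global_obs)
        x, _ = self.attn(x, x, x)  # each player embedding aggregates team context
        return self.head(x)        # one strategy vector per player

coach = CoachSketch(obs_dim=10)
print(coach(torch.randn(2, 5, 10)).shape)  # torch.Size([2, 5, 16]): team of 5
print(coach(torch.randn(2, 8, 10)).shape)  # torch.Size([2, 8, 16]): same coach, team of 8

Because attention pools over however many player embeddings are present, the same coach weights apply unchanged when agents join or leave; this size-invariance is what underlies the zero-shot generalization to new team compositions reported in the paper.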