Ivan-Zhong opened this issue 10 months ago

Hello. Thank you for your amazing work. I appreciate the effort to provide a unified library of MARL algorithms and environments for benchmarking and reproducibility. To better achieve this goal, I suggest integrating the HARL algorithms, which achieve state-of-the-art results on various benchmarks and are theoretically grounded. Their papers have been accepted to JMLR and ICLR 2024 (spotlight). As they represent important advances in MARL and are increasingly used as baselines, integrating them should help the adoption of this library.
Hi! Thanks for posting this!
I am going to have a thorough read when I have time. As I understand from a quick skim, a core component of these algorithms is disabling parameter sharing. This paradigm was also introduced in Heterogeneous Multi-Robot Reinforcement Learning (AAMAS 2023), where the Het prefix is prepended to algorithm names to indicate the non-sharing nature.
This is currently available in BenchMARL:

```bash
# experiment.share_policy_params decides sharing in the policy
# algorithm.share_param_critic decides sharing in the critic
python benchmarl/run.py \
  task=vmas/balance \
  algorithm=mappo \
  experiment.share_policy_params=False \
  algorithm.share_param_critic=False
```
We call the algorithm obtained this way HetMAPPO. The same command can be run for all actor-critic algorithms: maddpg, iddpg, mappo, ippo, masac, isac.
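For those who prefer scripting over the CLI, here is a minimal sketch using BenchMARL's Python `Experiment` API; it assumes the config fields `share_policy_params` and `share_param_critic` mirror the Hydra overrides above:

```python
from benchmarl.algorithms import MappoConfig
from benchmarl.environments import VmasTask
from benchmarl.experiment import Experiment, ExperimentConfig
from benchmarl.models.mlp import MlpConfig

# Load the default YAML configs, then turn off parameter sharing
experiment_config = ExperimentConfig.get_from_yaml()
experiment_config.share_policy_params = False  # sharing in the policy
algorithm_config = MappoConfig.get_from_yaml()
algorithm_config.share_param_critic = False  # sharing in the critic

# With both flags off, this trains what we call HetMAPPO
experiment = Experiment(
    task=VmasTask.BALANCE.get_from_yaml(),
    algorithm_config=algorithm_config,
    model_config=MlpConfig.get_from_yaml(),
    critic_model_config=MlpConfig.get_from_yaml(),
    seed=0,
    config=experiment_config,
)
experiment.run()
```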
The same is possible for Q-learning algorithms:

```bash
# experiment.share_policy_params decides sharing in the policy
python benchmarl/run.py \
  task=vmas/balance \
  algorithm=qmix \
  experiment.share_policy_params=False
```
We call the algorithm obtained this way HetQMIX. The same command can be run for all Q-learning algorithms: qmix, vdn, iql.
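Again, a hedged Python-API sketch under the same assumptions as above; these algorithms have no centralized critic parameters, so only the policy-sharing flag applies:

```python
from benchmarl.algorithms import QmixConfig
from benchmarl.environments import VmasTask
from benchmarl.experiment import Experiment, ExperimentConfig
from benchmarl.models.mlp import MlpConfig

# Per-agent Q-networks instead of one shared network -> HetQMIX
experiment_config = ExperimentConfig.get_from_yaml()
experiment_config.share_policy_params = False

experiment = Experiment(
    task=VmasTask.BALANCE.get_from_yaml(),
    algorithm_config=QmixConfig.get_from_yaml(),
    model_config=MlpConfig.get_from_yaml(),
    seed=0,
    config=experiment_config,
)
experiment.run()
```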
Heterogeneous agent spaces can also be used in BenchMARL. In particular, when you put agents in different groups, they can differ in any way you like and can even be in competition with other groups. For more info, see the note in this section.
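To illustrate what grouping looks like, here is a sketch against TorchRL's VmasEnv (which BenchMARL builds on), not BenchMARL's own API; the agent names are hypothetical placeholders:

```python
from torchrl.envs.libs.vmas import VmasEnv

# Hypothetical two-team split; actual agent names depend on the scenario
env = VmasEnv(
    scenario="balance",
    num_envs=32,
    n_agents=4,
    group_map={
        "team_a": ["agent_0", "agent_1"],
        "team_b": ["agent_2", "agent_3"],
    },
)
# Each group gets its own entry in the output tensordict, so policies,
# critics, and rewards can differ (or compete) across groups
print(env.group_map)
```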