Offline Multi-Agent Reinforcement Learning Datasets and Baselines
Offline MARL holds great promise for real-world applications by utilising static datasets to build decentralised controllers of complex multi-agent systems. However, offline MARL currently lacks a standardised benchmark for measuring meaningful research progress. Off-the-Grid MARL (OG-MARL) fills this gap by providing a diverse suite of datasets with baselines on popular MARL benchmark environments in one place, with a unified API and an easy-to-use set of tools.
OG-MARL forms part of the InstaDeep MARL ecosystem, developed jointly with the open-source community. To join us in these efforts, reach out, raise issues, or star the repository to stay up to date with the latest developments! You can contribute to the conversation around OG-MARL in the Discussions tab. Please don't hesitate to leave a comment; we will be happy to reply.
We recently moved our datasets to Hugging Face, which means that previous download links for the datasets may no longer work. Datasets can now be downloaded directly from Hugging Face.
Clone this repository.
git clone https://github.com/instadeepai/og-marl.git
Install og-marl and its requirements. We tested og-marl with Python 3.10 and Ubuntu 20.04. Consider using a conda virtual environment.
pip install -e .[tf2_baselines]
Download environment files. We will use SMACv1 in this example. MAMuJoCo installation instructions are included near the bottom of the README.
bash install_environments/smacv1.sh
Download environment requirements.
pip install -r install_environments/requirements/smacv1.txt
Train an offline system. In this example we will run Independent Q-Learning with Conservative Q-Learning (iql+cql). The script will automatically download the necessary dataset if it is not found locally.
python og_marl/tf2_systems/offline/iql_cql.py task.source=og_marl task.env=smac_v1 task.scenario=3m task.dataset=Good
You can find all offline systems at og_marl/tf2_systems/offline/, and they can be run similarly. Note that some systems only support discrete action spaces, while others only support continuous action spaces. The config files for the systems are found at og_marl/tf2_systems/offline/configs/. We use Hydra for config management, so config defaults can be overridden as command-line arguments, as above and in the example below.
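For example, to train the same system on a different scenario and dataset quality (both available in the og_marl vaults), simply override the corresponding config keys:

python og_marl/tf2_systems/offline/iql_cql.py task.source=og_marl task.env=smac_v1 task.scenario=8m task.dataset=Medium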
To quickly start working with a dataset, you do not even need to install og-marl.
Simply install Flashbax and download a dataset from Hugging Face.
pip install flashbax
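If you prefer to download programmatically, the huggingface_hub package can fetch files from the dataset repository. Below is a minimal sketch; the glob pattern is illustrative, so browse the file listing at https://huggingface.co/datasets/InstaDeepAI/og-marl for the exact paths.

from huggingface_hub import snapshot_download

# Download only the SMAC v1 3m files from the OG-MARL dataset repository.
# The allow_patterns glob is an assumption; check the repository for exact paths.
snapshot_download(
    repo_id="InstaDeepAI/og-marl",
    repo_type="dataset",
    allow_patterns=["*smac_v1*3m*"],
    local_dir="vaults",
)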
Then you should be able to do something like this.
from flashbax.vault import Vault
import jax
import numpy as np

# Load the "Good" quality dataset for the SMAC v1 2s3z scenario.
vault = Vault("og_marl/smac_v1/2s3z.vlt", vault_uid="Good")
experience = vault.read().experience

# Convert the JAX arrays in the experience tree to NumPy arrays.
numpy_experience = jax.tree.map(lambda x: np.array(x), experience)
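To get a quick feel for what is stored in a Vault, you can print the shape of every array in the experience tree. Exact field names vary by environment, but you can expect entries such as observations, actions and rewards.

# Print the shape of every array in the dataset.
print(jax.tree.map(lambda x: x.shape, numpy_experience))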
We also provide a simple demonstrative notebook showing how to use OG-MARL's dataset API here:
We have generated datasets on a diverse set of popular MARL environments. A list of currently supported environments is included in the table below. It is well known from the single-agent offline RL literature that the quality of experience in offline datasets can play a large role in the final performance of offline RL algorithms. Therefore, for each environment and scenario in OG-MARL we include a range of dataset distributions, including Good, Medium, Poor and Replay datasets, in order to benchmark offline MARL algorithms on a range of different dataset qualities. For more information on why we chose to include each environment and its task properties, please read our accompanying paper.
Our datasets are now hosted on Hugging Face for improved accessibility for the community: https://huggingface.co/datasets/InstaDeepAI/og-marl
⚠️ Some datasets have yet to be converted to the new dataset format (Vault). For available datasets, please refer to og_marl/vault_utils/download_vault.py or the Hugging Face datasets repository.
Environment | Scenario | Agents | Act | Obs | Reward | Types | Repo |
---|---|---|---|---|---|---|---|
SMAC v1 | 3m<br/>8m<br/>2s3z<br/>5m_vs_6m<br/>27m_vs_30m<br/>3s5z_vs_3s6z<br/>2c_vs_64zg | 3<br/>8<br/>5<br/>5<br/>27<br/>8<br/>2 | Discrete | Vector | Dense | Homog<br/>Homog<br/>Heterog<br/>Homog<br/>Homog<br/>Heterog<br/>Homog | source |
SMAC v2 | terran_5_vs_5<br/>zerg_5_vs_5<br/>terran_10_vs_10 | 5<br/>5<br/>10 | Discrete | Vector | Dense | Heterog | source |
Flatland | 3 Trains<br/>5 Trains | 3<br/>5 | Discrete | Vector | Sparse | Homog | source |
MAMuJoCo | 2x3 HalfCheetah<br/>2x4 Ant<br/>4x2 Ant | 2<br/>2<br/>4 | Cont. | Vector | Dense | Heterog<br/>Homog<br/>Homog | source |
PettingZoo | Pursuit<br/>Co-op Pong | 8<br/>2 | Discrete<br/>Discrete | Pixels<br/>Pixels | Dense | Homog<br/>Heterog | source |
We recently converted several datasets from prior works to Vaults and benchmarked our baseline algorithms on them. For more information, see our technical report on arXiv.
Paper | Environment | Scenario | Source |
---|---|---|---|
Pan et al. (2022) | MAMuJoCo | 2x3 HalfCheetah | source |
Pan et al. (2022) | MPE | simple_spread | source |
Shao et al. (2023) | SMAC v1 | 5m_vs_6m<br/>2s3z<br/>3s_vs_5z<br/>6h_vs_8z | source |
Wang et al. (2023) | SMAC v1 | 5m_vs_6m<br/>6h_vs_8z<br/>2c_vs_64zg<br/>corridor | source |
Wang et al. (2023) | MAMuJoCo | 6x1 HalfCheetah<br/>3x1 Hopper<br/>2x4 Ant | source |
{"og_marl": {
"smac_v1": {
"3m": ["Good", "Medium", "Poor"],
"8m": ["Good", "Medium", "Poor"],
"5m_vs_6m": ["Good", "Medium", "Poor"],
"2s3z": ["Good", "Medium", "Poor"],
"3s5z_vs_3s6z": ["Good", "Medium", "Poor"],
},
"smac_v2": {
"terran_5_vs_5": ["Replay"],
"terran_10_vs_10": ["Replay"],
"zerg_5_vs_5": ["Replay"],
},
"mamujoco": {
"2halfcheetah": ["Good", "Medium", "Poor"]
},
"gymnasium_mamujoco": {
"2ant": ["Replay"],
"2halfcheetah": ["Replay"],
"2walker": ["Replay"],
"3hopper": ["Replay"],
"4ant": ["Replay"],
"6halfcheetah": ["Replay"],
},
},
"cfcql": {
"smac_v1": {
"6h_vs_8z": ["Expert", "Medium", "Medium-Replay", "Mixed"],
"3s_vs_5z": ["Expert", "Medium", "Medium-Replay", "Mixed"]
"5m_vs_6m": ["Expert", "Medium", "Medium-Replay", "Mixed"]
"2s3z": ["Expert", "Medium", "Medium-Replay", "Mixed"]
},
},
"alberdice": {
"rware": {
"small-2ag": ["Expert"],
"small-4ag": ["Expert"],
"small-6ag": ["Expert"],
"tiny-2ag": ["Expert"],
"tiny-4ag": ["Expert"],
"tiny-6ag": ["Expert"],
},
},
"omar": {
"mpe": {
"simple_spread": ["Expert", "Medium", "Medium-Replay", "Random"]
"simple_tag": ["Expert", "Medium", "Medium-Replay", "Random"]
"simple_world": ["Expert", "Medium", "Medium-Replay", "Random"]
},
"mamujoco": {
"2halfcheetah": ["Expert", "Medium", "Medium-Replay", "Random"]
},
},
"omiga": {
"smac_v1": {
"2c_vs_64zg": ["Good", "Medium", "Poor"],
"6h_vs_8z": ["Good", "Medium", "Poor"],
"5m_vs_6m": ["Good", "Medium", "Poor"],
"corridor": ["Good", "Medium", "Poor"],
},
"mamujoco": {
"6halfcheetah": ["Expert", "Medium", "Medium-Expert", "Medium-Replay"],
"2ant": ["Expert", "Medium", "Medium-Expert", "Medium-Replay"],
"3hopper": ["Expert", "Medium", "Medium-Expert", "Medium-Replay"],
},
},
}
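Datasets converted from prior works can be loaded with the same Vault API. Here is a sketch, assuming the converted vaults follow the same source/env/scenario.vlt directory layout as the og_marl datasets; check og_marl/vault_utils/download_vault.py for the exact directory names.

from flashbax.vault import Vault

# Hypothetical example: load the CFCQL "Expert" dataset for 2s3z.
# Verify the exact relative path in og_marl/vault_utils/download_vault.py.
vault = Vault("cfcql/smac_v1/2s3z.vlt", vault_uid="Expert")
experience = vault.read().experience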
The OG-MARL datasets use a newer version of MuJoCo (2.1.0), while the OMIGA and OMAR datasets use an older version (2.0.0). The two versions have different installation instructions and should be installed in separate virtual environments.
For MuJoCo 2.1.0 (OG-MARL datasets):
bash install_environments/mujoco210.sh
pip install -r install_environments/requirements/mujoco.txt
pip install -r install_environments/requirements/mamujoco210.txt
For MuJoCo 2.0.0 (OMIGA and OMAR datasets):
bash install_environments/mujoco200.sh
pip install -r install_environments/requirements/mujoco.txt
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:~/.mujoco/mujoco200/bin
pip install -r install_environments/requirements/mamujoco200.txt
OG-MARL is part of InstaDeep's MARL ecosystem in JAX. In particular, we suggest users check out the following sister repositories:
Related: other libraries related to accelerated MARL in JAX.
If you use the OG-MARL datasets in your work, please cite the library using:
@inproceedings{formanek2023ogmarl,
author = {Formanek, Claude and Jeewa, Asad and Shock, Jonathan and Pretorius, Arnu},
title = {Off-the-Grid MARL: Datasets and Baselines for Offline Multi-Agent Reinforcement Learning},
year = {2023},
publisher = {AAMAS},
booktitle = {Extended Abstract at the 2023 International Conference on Autonomous Agents and Multiagent Systems},
}
The development of this library was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).