An algorithm that applies SAC (Soft Actor-Critic) to QMIX for multi-agent reinforcement learning. Watch the demo here.
- SMAC
- PyTorch (GPU support recommended for training)
- TensorBoard
- StarCraft II
For the installation of SMAC and StarCraft II, refer to the SMAC repository.
Train a model with the following command:
python main.py
Configurations and parameters for training are specified in `config.json`. Models will be saved to `./models`.
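A minimal sketch of what `config.json` might look like; every key shown here (`run_name`, `map_name`, and the rest) is a hypothetical placeholder rather than the repository's actual schema:

```json
{
  "run_name": "3m_sac_qmix",
  "map_name": "3m",
  "total_steps": 2000000,
  "batch_size": 32,
  "lr": 0.0005,
  "gamma": 0.99
}
```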
Test a trained model with the following command:
python test_model.py
Configurations and parameters for testing are specified in `test_config.json`. Make sure the `run_name` entries in `config.json` and `test_config.json` match (see the sketch below).
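For example, if training used `"run_name": "3m_sac_qmix"` in `config.json`, then `test_config.json` should carry the same value so the test script can locate the saved model under `./models`. The additional key below is a hypothetical placeholder:

```json
{
  "run_name": "3m_sac_qmix",
  "n_test_episodes": 32
}
```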
Note that $a_i$ is equivalent to $\mu_i$ and $s_i$ is equivalent to $o_i$ in the architecture schema above.
Training objective: policies that maximize
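A sketch of this objective in the standard maximum-entropy SAC form, written with per-agent policies $\pi_i$ and per-agent temperatures $\alpha_i$ (the per-agent temperatures are an assumption; the repository's exact formulation may differ):

$$\pi^{*}=\arg\max_{\pi}\;\sum_{t}\mathbb{E}_{(s_t,a_t)\sim\rho_{\pi}}\Big[r(s_t,a_t)+\sum_{i}\alpha_i\,\mathcal{H}\big(\pi_i(\cdot\mid s_{i,t})\big)\Big]$$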
Q-values computed by networks:
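A sketch, assuming QMIX-style mixing of the per-agent critics; the mixer $f_{\mathrm{mix}}$, its conditioning on the global state $s$, and the parameter names $\theta_i$ are placeholders:

$$Q_i=Q_{\theta_i}(s_i,a_i),\qquad Q_{tot}(s,a)=f_{\mathrm{mix}}\big(Q_1,\dots,Q_n;\,s\big)$$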
Individual state-value functions:
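In SAC the soft state value follows from the Q-values and the policy; a plausible per-agent form here (per-agent temperature $\alpha_i$ assumed):

$$V_i(s_i)=\mathbb{E}_{a_i\sim\pi_i}\big[Q_i(s_i,a_i)-\alpha_i\log\pi_i(a_i\mid s_i)\big]$$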
Total state-values (alpha is the entropy temperature):
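One plausible form, with the joint action $a=(a_1,\dots,a_n)$ sampled from the per-agent policies; whether the temperature enters once or per agent is an assumption:

$$V_{tot}(s)=\mathbb{E}_{a\sim\pi}\Big[Q_{tot}(s,a)-\sum_i\alpha_i\log\pi_i(a_i\mid s_i)\Big]$$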
Q-values expressed via the Bellman equation:
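The usual soft Bellman backup applied to the mixed value (sketch; $r_t$ is the team reward and $\gamma$ the discount factor):

$$Q_{tot}(s_t,a_t)=r_t+\gamma\,\mathbb{E}_{s_{t+1}}\big[V_{tot}(s_{t+1})\big]$$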
Critic networks update: minimize
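A standard SAC-style critic loss over a replay buffer $\mathcal{D}$; the target value network $\bar{V}_{tot}$ is an assumption:

$$J_Q=\mathbb{E}_{(s_t,a_t)\sim\mathcal{D}}\Big[\tfrac{1}{2}\big(Q_{tot}(s_t,a_t)-\hat{Q}(s_t,a_t)\big)^2\Big],\qquad \hat{Q}(s_t,a_t)=r_t+\gamma\,\mathbb{E}_{s_{t+1}}\big[\bar{V}_{tot}(s_{t+1})\big]$$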
Actor networks update: maximize
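A standard SAC-style actor objective, maximized with respect to the policy parameters (sketch in this notation):

$$J_\pi=\mathbb{E}_{s\sim\mathcal{D},\,a\sim\pi}\Big[Q_{tot}(s,a)-\sum_i\alpha_i\log\pi_i(a_i\mid s_i)\Big]$$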
Entropy temperatures update: minimize
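A standard SAC-style temperature loss per agent, where $\bar{\mathcal{H}}$ denotes a target entropy (both the per-agent temperatures and the use of a target entropy are assumptions):

$$J(\alpha_i)=\mathbb{E}_{a_i\sim\pi_i}\big[-\alpha_i\log\pi_i(a_i\mid s_i)-\alpha_i\bar{\mathcal{H}}\big]$$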
Note that the data for the other algorithms are taken from the SMAC paper, so the evaluation method is kept the same as in the SMAC paper (StarCraft II version: SC2.4.6.2.69232).
(Mean of 5 independent runs)
(Mean of 5 independent runs)