As I work with this code, I find that what wandb records is somewhat different from what I intuitively expect.
When I train mqmix on an MPE environment, the function 'batch_train_q' in 'off-policy/offpolicy/runner/mlp/base_runner.py' loops over the policies and calls 'self.trainer.train_policy_on_batch'. In this case, 'train_policy_on_batch' in 'offpolicy/algorithms/mqmix/mqmix.py' is invoked, which updates the global and local Q functions and returns a train_info dict.
It seems that the train_info entries labeled with policy ids do not represent differences between the policies, but rather differences between successive training calls in the loop. This also appears to be confirmed in wandb, where the per-policy figures show almost no difference.
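To make the point concrete, here is a minimal, self-contained sketch (not the repo's actual code) of what I understand the loop to be doing. The name 'train_policy_on_batch' mirrors the repo; the class 'SharedQTrainer' and its internals are illustrative assumptions standing in for the shared global/local Q networks:

```python
class SharedQTrainer:
    """Stand-in for the mqmix trainer: all 'policies' share one set of Q nets."""

    def __init__(self):
        self.shared_loss = 10.0  # proxy for the state of the shared Q networks

    def train_policy_on_batch(self, batch, update_policy_id):
        # The update modifies shared parameters no matter which policy id is
        # passed in, so the returned stats track overall training progress,
        # not anything specific to that policy.
        self.shared_loss *= 0.9
        return {"loss": self.shared_loss}


trainer = SharedQTrainer()
train_infos = {}
for policy_id in ["policy_0", "policy_1", "policy_2"]:
    # batch_train_q loops like this and logs the result under each policy id
    train_infos[policy_id] = trainer.train_policy_on_batch(
        batch=None, update_policy_id=policy_id
    )

print(train_infos)
```

Under this reading, the "per-policy" curves in wandb differ only by one extra shared update per iteration, which would explain why the figures look nearly identical.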