Jiwonjeon9603 / MASER

This repository is an implementation of "MASER: Multi-Agent Reinforcement Learning with Subgoals Generated from Experience Replay Buffer" accepted to ICML 2022.

About the sparse reward setting #7

Closed ziyan-wang98 closed 11 months ago

ziyan-wang98 commented 1 year ago

Hi,

The Run Example in the README contains "map_print=3m_maser_sparse", which differs from the general pymarl settings. Could you clarify what map_print refers to, and how to test MASER in the sparse setting? Also, looking at Table 1 in the paper, I wanted to confirm whether the sparse setting tested there is the default setting of SMAC?

Jiwonjeon9603 commented 1 year ago

Hi, first of all, you can ignore map_print. We just wanted to print map_name while training.

Second, as you can see in the paper, it is little bit different from original sparse setting of smac. In smac sparse setting, the agents only get reward when they win the whole game. However, in MASER sparse setting, the agents get reward when enemy/ally dies and win the whole game. The dense reward in SMAC is considering ally/enemy's health too. You can see the exact setting in the paper.