Closed lordemyj closed 1 month ago
Hi, sorry for the late reply. EP and FP were first introduced in the MAPPO paper (Figure 4). EP stands for the environment-provided global state: the same global state is fed to the critic for all actors. FP is the (feature-pruned) agent-specific global state: each actor's critic receives a different global-state input. Because of this, the critic-related data under FP always carries an extra dimension of size n_agents
to hold the per-agent inputs. As for the rewards: since we consider fully cooperative scenarios, every agent receives the same team reward. Therefore in EP we save only the first agent's reward, while in FP we save the rewards of all agents, simply for convenience of data processing.
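To make the shape difference concrete, here is a minimal NumPy sketch of the point above. All names and dimensions are illustrative, not taken from the actual codebase: FP data keeps an extra `n_agents` axis, and under full cooperation every column of the reward array is identical, so saving `rewards[:, 0]` loses no information.

```python
import numpy as np

# Illustrative sizes (not from the real codebase).
n_steps, n_agents, state_dim = 5, 3, 8

# EP: one environment-provided global state shared by all actors' critics,
# so there is no agent dimension.
ep_state = np.random.randn(n_steps, state_dim)

# FP: a different agent-specific global state per actor, so the critic
# data carries an extra n_agents dimension.
fp_state = np.random.randn(n_steps, n_agents, state_dim)

# Fully cooperative: every agent receives the same team reward each step.
team_reward = np.random.randn(n_steps, 1)
rewards = np.repeat(team_reward, n_agents, axis=1)  # shape (n_steps, n_agents)

# All columns are identical, so storing rewards[:, 0] (as in EP) is enough;
# FP keeps all columns only to line up with the extra agent axis of fp_state.
assert np.allclose(rewards[:, 0], rewards[:, 1])
print(ep_state.shape, fp_state.shape, rewards[:, 0].shape)
```

This also answers why discarding the other agents' rewards is safe: they are exact copies of the first agent's reward, not independent signals.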
When `self.state_type == "EP"`, why is only the first agent's reward taken (`rewards[:, 0]`), and why is the second agent's reward ignored?