Question
Hi,
I have been using MPE recently (simple adversary), and thanks for this great environment!
However, while trying to understand the agents' observations, I found that they are not consistent with what is described in the documentation (https://pettingzoo.farama.org/environments/mpe/simple_adversary/): _Agent observation space: [self_pos, self_vel, goal_rel_position, landmark_rel_position, other_agent_relpositions]
In the low-level implementation, what users actually obtain comes from the observation function here: https://github.com/Farama-Foundation/PettingZoo/blob/master/pettingzoo/mpe/simple_adversary/simple_adversary.py#L229, in which the observation returned for an agent is
[relative_pos_with_goal, relative_pos_with_landmarks, relative_pos_with_other_agents]
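To make the mismatch concrete, here is a minimal numpy sketch of the two orderings. The variable names and values below are placeholders for illustration, not the actual MPE internals; the point is only that the documented observation includes self_pos and self_vel, while the implemented one does not:

```python
import numpy as np

# Hypothetical 2-D quantities for one agent (placeholder values),
# assuming one goal, one landmark, and one other agent.
self_pos = np.array([0.1, 0.2])
self_vel = np.array([0.0, 0.0])
goal_rel = np.array([0.3, -0.1])         # goal position relative to the agent
landmark_rel = np.array([0.5, 0.4])      # landmark position relative to the agent
other_agent_rel = np.array([-0.2, 0.6])  # other agent's relative position

# Ordering as described in the documentation:
documented_obs = np.concatenate(
    [self_pos, self_vel, goal_rel, landmark_rel, other_agent_rel]
)

# Ordering as actually returned by the code (no self_pos / self_vel):
actual_obs = np.concatenate([goal_rel, landmark_rel, other_agent_rel])

print(documented_obs.shape)  # (10,)
print(actual_obs.shape)      # (6,)
```

So a user reading the documentation would expect a longer observation vector than the one the environment actually returns, and the leading entries would have a different meaning.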
Could you please double-check the documentation and the code, since this disagreement might be confusing? :) But again, MPE is a great environment, and thank you for your efforts maintaining it.
Best, Shuo