openai / maddpg

Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
https://arxiv.org/pdf/1706.02275.pdf

Question regarding the replay buffers and the Critic networks. (duplicates in the state) #43

Open opt12 opened 4 years ago

opt12 commented 4 years ago

Hello everybody!

As far as I can see from the code, each agent maintains its own replay buffer.

In the training step, when sampling the minibatch, the observations of all agents are collected and concatenated. https://github.com/openai/maddpg/blob/fbba5e45a5086160bdf6d9bfb0074b4e1fd1535e/maddpg/trainer/maddpg.py#L173-L177

As far as I can see, this leads to duplicates in the state input to each agent's critic. If there are components of the environment state that are part of every agent's observation, these components end up in the critic's input multiple times.
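To make my concern concrete, here is a minimal toy sketch (not the repo's code; the observation layout and numbers are made up) of what I mean by the concatenation duplicating shared components:

```python
import numpy as np

# Hypothetical toy example: 2 agents whose observations both contain a
# shared landmark position plus a private velocity component.
landmark_pos = np.array([0.5, -0.2])                                  # shared environment component
obs_agent_0 = np.concatenate([landmark_pos, np.array([0.1, 0.0])])    # shared part + private part
obs_agent_1 = np.concatenate([landmark_pos, np.array([-0.3, 0.4])])   # shared part + private part

# Centralized critic input, analogous to concatenating the per-agent
# observations sampled from the buffers in the linked training code:
critic_input = np.concatenate([obs_agent_0, obs_agent_1])
print(critic_input)
# [ 0.5 -0.2  0.1  0.   0.5 -0.2 -0.3  0.4]  -> landmark_pos appears twice
```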

Is this correct, or am I missing something?

Does this (artificial) state expansion have any adverse effect on the critic, or can we safely assume that the critic will quickly learn that the values at some of its input nodes are always identical and can therefore be treated jointly?

Are there any memory concerns due to the shared state components being stored multiple times, once in each agent's replay buffer? (Memory is probably not a big worry for RL people, but I have a background in embedded systems.)
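To give a sense of the scale I have in mind, here is a rough back-of-the-envelope estimate; all numbers are illustrative assumptions, not values taken from this repo:

```python
# Back-of-the-envelope estimate of the redundant storage caused by shared
# observation components living in every agent's buffer. Numbers are made up.
n_agents = 3            # agents, each with its own replay buffer
buffer_len = 1_000_000  # transitions kept per buffer
shared_dim = 10         # size of the environment-state part common to all observations
bytes_per_value = 4     # float32

# The shared components are stored once per agent instead of once globally,
# so (n_agents - 1) copies are redundant:
redundant_bytes = (n_agents - 1) * buffer_len * shared_dim * bytes_per_value
print(f"redundant storage: {redundant_bytes / 1e6:.0f} MB")  # ~80 MB for these numbers
```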

I would be very grateful for some more insight on this.

Regards, Felix