blavad / marl

Multi-agent reinforcement learning framework
https://www.david-albert.fr/marl

multiple DQN trained agents for a single environment #1

Open indhra opened 4 years ago

indhra commented 4 years ago

@blavad thanks for the huge repo!

Hi, I have a query.

I would like to train two agents with DQN in the same environment, but independently of each other. Is that possible? If so, please help me out.

indhra commented 4 years ago

It's a kind of competitive setting: agent A vs. agent B in the same environment.

blavad commented 4 years ago

Hi @indhra007. Indeed, it is totally possible to train two independent agents with the DQN algorithm. All you have to do is the following (a consolidated sketch is given after the steps):

1. Instantiate your OpenAI Gym-like environment and get the observation and action spaces for both agents (in case they have different observation and/or action spaces):

```python
import marl
from marl.agent import DQNAgent

env = my_env()

# This part may depend on the implementation of the environment
obs_space1 = env.observation_space[0]
act_space1 = env.action_space[0]

obs_space2 = env.observation_space[1]
act_space2 = env.action_space[1]
```

2. Instantiate two DQNAgent instances with the best training parameters for each. The easiest way to do it (with neither custom parameters nor a custom model) is as follows:

```python
agent1 = DQNAgent("MlpNet", obs_space1, act_space1)
print("#> Agent 1 :\n", agent1, "\n")

agent2 = DQNAgent("MlpNet", obs_space2, act_space2)
print("#> Agent 2 :\n", agent2, "\n")
```

3. Instantiate your multi-agent system for reinforcement learning. If the agents are independent learners, you don't need to use the set_mas() function, since independent learners don't need to know local information about the other agents (i.e. the other agents' policies):

```python
mas = MARL(agents_list=[agent1, agent2])
```

4. Train and test your system:

```python
# Train the agent for 100 000 timesteps
mas.learn(env, nb_timesteps=100000)

# Test the agent for 10 episodes
mas.test(env, nb_episodes=10)
```
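
Putting the four steps together, a minimal end-to-end script looks roughly like the following. Note that `my_env()` is still a placeholder for your own two-agent environment, and the `from marl import MARL` line assumes the class is exposed at the top level of the `marl` package:

```python
from marl import MARL            # assumed top-level export of the multi-agent system class
from marl.agent import DQNAgent

env = my_env()                   # placeholder: your own two-agent, Gym-like environment

# Per-agent spaces (indexing may depend on the environment implementation)
obs_space1, act_space1 = env.observation_space[0], env.action_space[0]
obs_space2, act_space2 = env.observation_space[1], env.action_space[1]

# Two independent DQN learners with the default MLP model
agent1 = DQNAgent("MlpNet", obs_space1, act_space1)
agent2 = DQNAgent("MlpNet", obs_space2, act_space2)

# Joint multi-agent system: train for 100 000 timesteps, then test for 10 episodes
mas = MARL(agents_list=[agent1, agent2])
mas.learn(env, nb_timesteps=100000)
mas.test(env, nb_episodes=10)
```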



I hope this helps. I am continuing to implement modules for this API, and I hope I will have time to improve the documentation in order to provide more useful examples.
indhra commented 4 years ago

@blavad thanks for the quick reply!

If possible, can you share an environment which has multiple agents, along with some documentation?

blavad commented 4 years ago

@indhra007 sorry for the late answer. To avoid package import problems when using a notebook, go to the marl directory before installing the package. If you are using a notebook or Google Colab, the following lines should fix the problem:

```
!git clone https://github.com/blavad/marl.git
%cd marl
!pip install -e .
```

or

```
!git clone https://github.com/blavad/marl.git
# each "!" command runs in its own shell, so cd and pip must share one line
!cd marl && pip install -e .
```

If you are using the command line, something like the following should work:

```
git clone https://github.com/blavad/marl.git
cd marl
pip install -e .
```

indhra commented 4 years ago

@blavad Is there any multi-agent environment other than soccer? Since the soccer environment has no documentation, it is not possible to understand it.

blavad commented 4 years ago

@indhra007 For the moment I cannot share another well-documented environment. I am currently working with another environment (for the game Hanabi), but it is not online yet. You can check this section of the documentation (https://blavad.github.io/marl/html/quickstart/environment.html) for a brief overview of how to build an adequate environment.
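
For a rough idea of the shape such an environment can take, here is a small hypothetical sketch that matches the per-agent indexing used earlier in this thread (`env.observation_space[0]`, `env.action_space[0]`, ...). It is only an illustration using classic Gym spaces; the exact reset/step signatures expected by marl are the ones described in the documentation linked above:

```python
import numpy as np
from gym import spaces  # classic OpenAI Gym spaces


class TwoAgentToyEnv:
    """Toy two-agent environment illustrating per-agent space indexing.

    Illustration only: the precise interface expected by marl is described
    in the linked documentation.
    """

    def __init__(self):
        # One space per agent, indexable by agent id
        self.observation_space = [
            spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32),
            spaces.Box(low=0.0, high=1.0, shape=(4,), dtype=np.float32),
        ]
        self.action_space = [spaces.Discrete(4), spaces.Discrete(4)]
        self._state = None

    def reset(self):
        self._state = np.random.rand(2, 4).astype(np.float32)
        # One observation per agent
        return [self._state[0], self._state[1]]

    def step(self, actions):
        # `actions` holds one action per agent
        self._state = np.random.rand(2, 4).astype(np.float32)
        observations = [self._state[0], self._state[1]]
        rewards = [0.0, 0.0]  # per-agent rewards (e.g. zero-sum for a competitive game)
        done = False
        info = {}
        return observations, rewards, done, info
```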

I will let you know as soon as I make a repo with some multi-agent environments.

indhra commented 4 years ago

Ok. Does the soccer environment work?