abhisheknaik96 / MultiAgentTORCS

The multi-agent version of TORCS for developing control algorithms for fully autonomous driving in the cluttered, multi-agent settings of everyday life.

Changes to allow learning for multiple agents together #6

Open MehaKaushik opened 6 years ago

MehaKaushik commented 6 years ago

This PR:

  1. Fixes the issue of multiple TORCS windows opening at the beginning of episodes.
  2. Fixes a crash when multiple agents are trained together.
  3. Introduces a new script, multi_ddpg.py, in which multiple agents learn using DDPG.
  4. The state vector is enlarged from 29 to 65 dimensions to include information about the opponents: the readings of 36 range sensors on the vehicle, each reporting the distance to the closest obstacle along that sensor's direction (see the sketch after this list).
  5. Weights are now stored for each agent in the folder weights/port_number after every 300 episodes.
  6. Weights from training 3 agents together are included in weights.
  7. autostart.sh has been added, with extra commented-out lines for speeding up the TORCS simulator.
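
A minimal sketch of how the enlarged 65-dimensional state could be assembled, assuming the usual gym_torcs-style observation fields (angle, track, trackPos, speedX/Y/Z, wheelSpinVel, rpm, opponents); the exact field names and any normalisation in this PR's code may differ:

```python
import numpy as np

def make_state(obs):
    """Stack the original 29 features with the 36 opponent range sensors.
    Each opponent sensor reports the distance to the closest car along
    that sensor's direction."""
    return np.hstack((
        obs.angle,          # 1  heading error w.r.t. the track axis
        obs.track,          # 19 track-edge range sensors
        obs.trackPos,       # 1  lateral position on the track
        obs.speedX,         # 1  longitudinal speed
        obs.speedY,         # 1  lateral speed
        obs.speedZ,         # 1  vertical speed
        obs.wheelSpinVel,   # 4  wheel rotation speeds
        obs.rpm,            # 1  engine rpm
        obs.opponents,      # 36 opponent range sensors
    ))                      # total: 65
```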

How to run:

  1. Run scripts/startTorcs.sh in a separate terminal.
  2. Run multi_ddpg.py in another terminal. Set the number of workers in multi_ddpg.py, and in the TORCS simulator select that many scr_server drivers (see the sketch after this list).
  3. In scripts/autostart.sh, change the command "torcs" to "cd path_of_installation && ./torcs" if simply running "torcs" does not launch TORCS for you.
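
For step 2, a rough sketch of the worker/port bookkeeping, assuming scr_server drivers listen on consecutive ports starting at 3001; the identifiers below are illustrative and not the actual names used in multi_ddpg.py:

```python
# Illustrative sketch only; check multi_ddpg.py for the real variable names.
NUM_WORKERS = 3          # must equal the number of scr_server drivers
                         # selected in the TORCS race configuration
BASE_PORT = 3001         # scr_server 1 -> 3001, scr_server 2 -> 3002, ...

ports = [BASE_PORT + i for i in range(NUM_WORKERS)]

def launch_worker(port):
    """Hypothetical placeholder: in the real script, each worker creates its
    own client/agent bound to `port` and saves weights under weights/<port>/."""
    print(f"worker on port {port} -> weights/{port}/")

for port in ports:
    launch_worker(port)
```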

ToDo:

  1. Readme to be changed once this PR is merged.
  2. In snakeoil3_gym.py, n_fail needs to be adjusted according to the number of agents being trained; for 6 agents, 10 was a suitable value (see the sketch below).
  3. Some cleanups done previously have been lost and need to be redone.
  4. Every time a new episode starts, there is a redundant start in which only the first client connects and the rest cannot. This does not cause any issues in training, but it is a poor user experience and needs to be worked out.
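
For ToDo item 2, a hedged sketch of how n_fail might be scaled with the number of agents; here n_fail is treated as a retry budget for clients waiting to connect (its exact semantics live in snakeoil3_gym.py), and the scaling rule below is only a guess anchored to the note that 10 suited 6 agents:

```python
def suggested_n_fail(num_agents):
    """Guess an n_fail value: scale the retry budget roughly linearly with
    the number of agents, keeping the value that worked for 6 agents as a floor."""
    return max(10, round(10 * num_agents / 6))

if __name__ == "__main__":
    for n in (1, 3, 6, 12):
        print(n, suggested_n_fail(n))   # 1->10, 3->10, 6->10, 12->20
```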