LucasAlegre / sumo-rl

Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with Gymnasium, PettingZoo, and popular RL libraries.
https://lucasalegre.github.io/sumo-rl
MIT License

Documentation for Different Environments #44

Open jkterry1 opened 3 years ago

jkterry1 commented 3 years ago

Hey,

I was planning to explore using a handful of these environments as part of my research. However, unless I'm missing something, there are no explanations or visuals of the mechanics or behaviors of the different environments/maps? Is that the case, and if so, would you be willing to take an hour to add them to the readme or something? It'd be super helpful for those potentially interested in your environment.

LucasAlegre commented 3 years ago

Hi,

I'm glad that you are interested in using sumo-rl! Sure, I could definitely do that. Do you have anything specific in mind? Maybe describe the default definition of states and rewards? Notice that SumoEnvironment is generic and can be instantiated with any .net and .rou SUMO files. Also, you can visualize the networks directly in SUMO.
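For instance, a minimal single-agent setup might look like this (a sketch, assuming the current Gymnasium-style API; the file paths are placeholders for your own SUMO files):

```python
from sumo_rl import SumoEnvironment

# Placeholders: point these at any SUMO network/route files you like.
env = SumoEnvironment(
    net_file="your-network.net.xml",
    route_file="your-routes.rou.xml",
    use_gui=False,       # True opens sumo-gui so you can watch the simulation
    num_seconds=20000,   # length of the simulation in seconds
    single_agent=True,   # single traffic signal, Gymnasium-style API
)

obs, info = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # replace with your agent's policy
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated
env.close()
```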

jkterry1 commented 3 years ago

"Maybe describe the default definition of states and rewards?" That, plus action and observation spaces and images of what each look like would work, ya :)

LucasAlegre commented 3 years ago

I just updated the readme with the basic definitions, but I plan to add more details later!

jkterry1 commented 3 years ago

Hey, I just sat down and looked at this. I'm someone who's fairly experienced in RL (and I wanted to use these environments as part of a set of many to test a general MARL algorithm I've been working on), but I'm not very experienced with traffic control/SUMO, so I have a few questions after reading:

  • What does phase_one_hot mean?
  • What does lane_1_queue mean?
  • What does green phase mean?
  • Could you please document the action space too?
  • Could you elaborate a bit on why that specific reward function makes sense as the default? Is that the standard in the literature?
  • Also, your new links to TrafficSignal are dead.

LucasAlegre commented 3 years ago

Hey, I believe I have answered these questions in commit f0b387fb3fdb9e8432fa81b42dee61af08402f65. (I also fixed the dead links.)
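For anyone finding this later, the layout of the default observation as described in the readme is roughly the following (the helper below is a hypothetical sketch, not the library's actual code):

```python
import numpy as np

def default_observation(phase_one_hot, min_green_flag, densities, queues):
    """Hypothetical helper illustrating the default observation layout
    described in the readme (not the library's actual code).

    phase_one_hot  : one-hot vector indicating the active green phase
    min_green_flag : 1.0 if min_green seconds have elapsed in the current
                     phase (i.e., the signal is allowed to switch), else 0.0
    densities      : per incoming lane, vehicles / lane capacity
    queues         : per incoming lane, stopped vehicles / lane capacity
    """
    return np.concatenate([phase_one_hot, [min_green_flag], densities, queues])

# e.g., 4 green phases and 8 incoming lanes -> a vector of length
# 4 + 1 + 8 + 8 = 21
```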

Regarding the reward function, there is not really a standard in the literature. Change in delay/waiting time is what, in my experience, worked best. I can point you to some papers that use this reward:

I have seen many papers using Pressure as the reward (but I didn't get better results with it):
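Conceptually, the two rewards look like this (a sketch of the definitions, not the library's exact code; sign conventions may differ):

```python
def diff_waiting_time_reward(prev_total_wait: float, total_wait: float) -> float:
    # Default reward: the change in cumulative waiting time of all vehicles
    # approaching the intersection. Positive when waiting time decreased.
    return prev_total_wait - total_wait

def pressure_reward(incoming_vehicles: int, outgoing_vehicles: int) -> float:
    # "Pressure" reward: negative difference between the number of vehicles
    # on incoming and outgoing lanes; maximized by keeping the two balanced.
    return -(incoming_vehicles - outgoing_vehicles)
```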

jkterry1 commented 3 years ago

Hey thanks a ton for that!

A few more questions:

LucasAlegre commented 3 years ago


  • You have a sentence "Obs: Every time a phase change occurs, the next phase is preceded by a yellow phase lasting yellow_time seconds.". Either that's in the wrong section or I'm very confused.

Oops, this "Obs:" means "P.S.:" :P It means that when your action changes the phase, the env sets a yellow phase before actually setting the phase selected by the agent's action.
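In pseudocode, that mechanic is roughly this (a hypothetical sketch, not the library's implementation):

```python
def phases_for_action(current_green: int, chosen_green: int, yellow_time: int):
    """Hypothetical sketch of the transition described above: the sequence
    of phases the signal actually displays after an action."""
    if chosen_green != current_green:
        # a yellow phase lasting yellow_time seconds precedes the new green
        return [("yellow", yellow_time), ("green", chosen_green)]
    return [("green", chosen_green)]  # same phase chosen: no yellow inserted
```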

  • I'm sure this is simply due to my unfamiliarity, but what's a "green phase"?

The nomenclature for traffic signal control can be a bit confusing. By green phase I mean a phase configuration with green (permissive) movements. The 4 actions in the readme are examples of 4 green phases.
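So the action space is just a discrete choice among the green phases (a sketch, assuming the readme's 4-phase intersection and a Gymnasium-style space):

```python
from gymnasium import spaces

# One action per green phase: action i means "activate green phase i".
# For the readme's example intersection with 4 green phases:
action_space = spaces.Discrete(4)
```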

  • Would you also be willing to clarify in the readme what the different built-in nets are like? That'd also be super helpful.

Sure! I also intend to add more networks to the repository.

ahphan commented 3 years ago

Hello,

I am really new to SUMO, but is there a way to deploy a trained agent with sumo-gui?

I was able to run the example experiments/ql_2way-single-intersection.py and plot the results. Then I tried to run "python experiments/ql_2way-single-intersection.py -gui", which provided visualizations in sumo-gui, but the terminal window wasn't updating the step number (which usually increments to 100,000), so I'm not sure if it is actually training and visualizing at the same time.

In summary, I would like to know if I can save the trained agent, deploy it in an environment, and visualize it in sumo-gui. Also, when I use the "-gui" argument, is it still training the agent as it normally would if I ran "python experiments/ql_2way-single-intersection.py", just without updating the step number?

I really appreciate your contributions, thank you!

LucasAlegre commented 3 years ago


Hi,

Using -gui only activates the SUMO GUI; it has no effect on the training procedure. Notice that training is part of the algorithm (not the environment), so you can use any algorithm you want, save the model, and then run it again with sumo-gui to visualize it. In the ql example I did not implement a method to save the agents' Q-tables, but that should be easy to do.
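For example, something like this would work (a sketch, assuming, as in the example script, that ql_agents is a dict of QLAgent instances, each with a plain-dict q_table attribute):

```python
import pickle

# After training: persist each agent's Q-table (assumes ql_agents is the
# dict of QLAgent instances from the example script).
with open("q_tables.pkl", "wb") as f:
    pickle.dump({ts: agent.q_table for ts, agent in ql_agents.items()}, f)

# Later: reload the tables, rebuild the agents with exploration disabled,
# and run the environment with use_gui=True to watch the learned policy.
with open("q_tables.pkl", "rb") as f:
    q_tables = pickle.load(f)
```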

LucasAlegre commented 3 years ago

@jkterry1 I just added network and route files from RESCO (check the readme). Basically, RESCO is a set of benchmarks for traffic signal control that was built on top of SUMO-RL. In their paper you can find results for different algorithms. Later this week I'll try to add more documentation and examples for these networks.

jkterry1 commented 3 years ago

Hey, it's been a week so I'm just following up on this :)

LucasAlegre commented 3 years ago


Hey, I have just added an API to instantiate a few environments in the file https://github.com/LucasAlegre/sumo-rl/blob/master/sumo_rl/environment/resco_envs.py!
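Usage looks roughly like this (a sketch; the exact constructor names and keyword arguments are in resco_envs.py, with grid4x4 shown here as one example):

```python
# Sketch: grid4x4 is one of the RESCO scenario constructors defined in
# resco_envs.py; it returns a PettingZoo environment for that network.
from sumo_rl.environment.resco_envs import grid4x4

env = grid4x4()
env.reset()
```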