SUMO-RL provides a simple interface to instantiate Reinforcement Learning (RL) environments with SUMO for Traffic Signal Control.
Goals of this repository:
- Provide a simple interface to work with Reinforcement Learning for Traffic Signal Control using SUMO
- Support Multiagent RL
- Compatibility with gymnasium.Env and popular RL libraries such as stable-baselines3 and RLlib
- Easy customisation: state and reward definitions are easily modifiable
The main class is SumoEnvironment. If instantiated with the parameter single_agent=True, it behaves like a regular Gymnasium Env. For multi-agent environments, use env or parallel_env to instantiate a PettingZoo environment with the AEC or Parallel API, respectively. TrafficSignal is responsible for retrieving information about, and actuating, the traffic lights via the TraCI API.
For more details, check the documentation online.
sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc
Don't forget to set the SUMO_HOME variable (the default SUMO installation path is /usr/share/sumo):
echo 'export SUMO_HOME="/usr/share/sumo"' >> ~/.bashrc
source ~/.bashrc
Important: for a huge performance boost (~8x) with Libsumo, you can set the following environment variable:
export LIBSUMO_AS_TRACI=1
Notice that you will not be able to run with sumo-gui or with multiple simulations in parallel if this is active (see the Libsumo documentation for more details).
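A minimal sketch of setting the variable from Python instead of the shell, assuming it is read when sumo_rl is imported (if in doubt, prefer the shell export above):

import os

# Hedged alternative to the shell export above: the variable is assumed to be
# read when sumo_rl is imported, so it must be set beforehand.
os.environ["LIBSUMO_AS_TRACI"] = "1"

import sumo_rl  # imported only after the variable is set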
The stable release version is available through pip:
pip install sumo-rl
Alternatively, you can install the latest (unreleased) version from source:
git clone https://github.com/LucasAlegre/sumo-rl
cd sumo-rl
pip install -e .
The default observation for each traffic signal agent is a vector:
obs = [phase_one_hot, min_green, lane_1_density,...,lane_n_density, lane_1_queue,...,lane_n_queue]
- phase_one_hot is a one-hot encoded vector indicating the current active green phase
- min_green is a binary variable indicating whether min_green seconds have already passed in the current phase
- lane_i_density is the number of vehicles in incoming lane i divided by the total capacity of the lane
- lane_i_queue is the number of queued (speed below 0.1 m/s) vehicles in incoming lane i divided by the total capacity of the lane
You can define your own observation by implementing a class that inherits from ObservationFunction and passing it to the environment constructor.
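For example, a minimal sketch of a custom observation that keeps only the green-phase one-hot encoding and the per-lane queues. The import path, the TrafficSignal attribute/method names used below, and the observation_class constructor parameter are assumptions modelled on the default observation; check the source before relying on them:

import numpy as np
from gymnasium import spaces

from sumo_rl.environment.observations import ObservationFunction  # import path assumed


class PhaseAndQueueObservation(ObservationFunction):
    """Hypothetical observation: green-phase one-hot encoding plus per-lane queues."""

    def __call__(self):
        # self.ts is the TrafficSignal this observation function is attached to;
        # attribute and method names are assumptions based on the default observation.
        phase_one_hot = [1 if self.ts.green_phase == i else 0 for i in range(self.ts.num_green_phases)]
        queue = self.ts.get_lanes_queue()
        return np.array(phase_one_hot + queue, dtype=np.float32)

    def observation_space(self):
        return spaces.Box(
            low=0.0,
            high=1.0,
            shape=(self.ts.num_green_phases + len(self.ts.lanes),),
            dtype=np.float32,
        )


# Assumed constructor parameter name:
# env = SumoEnvironment(..., observation_class=PhaseAndQueueObservation)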
The action space is discrete. Every 'delta_time' seconds, each traffic signal agent can choose the next green phase configuration.
E.g., in the 2-way single intersection there are |A| = 4 discrete actions, corresponding to the following green phase configurations:
Important: every time a phase change occurs, the next phase is preceded by a yellow phase lasting yellow_time seconds.
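A minimal sketch of how these timings can be configured when building the environment. The parameter names delta_time, yellow_time, and min_green follow the terms used above, but check the SumoEnvironment signature for the exact names and defaults:

from sumo_rl import SumoEnvironment

env = SumoEnvironment(
    net_file="path_to_your_network.net.xml",
    route_file="path_to_your_routefile.rou.xml",
    single_agent=True,
    delta_time=5,    # seconds between consecutive actions of each traffic signal agent
    yellow_time=2,   # duration of the yellow phase inserted on every phase change
    min_green=5,     # minimum green time before the agent may switch phases
)
# In single-agent mode the action space is Discrete(n),
# with one action per green phase configuration.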
The default reward function is the change in cumulative vehicle delay: r_t = D_{t-1} - D_t, where D_t is the sum of the waiting times of all approaching vehicles at step t.
That is, the reward is how much the total delay changed in relation to the previous time-step, so the agent is rewarded when the total delay decreases.
You can choose a different reward function (see the ones implemented in TrafficSignal) with the parameter reward_fn
in the SumoEnvironment constructor.
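For example, a hedged sketch assuming reward_fn also accepts the string name of one of the rewards predefined in TrafficSignal (the name "queue" is an assumption; check TrafficSignal for the available options):

# "queue" is assumed to be one of the predefined reward names in TrafficSignal
env = SumoEnvironment(..., reward_fn="queue")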
It is also possible to implement your own reward function:
def my_reward_fn(traffic_signal):
return traffic_signal.get_average_speed()
env = SumoEnvironment(..., reward_fn=my_reward_fn)
If your network only has ONE traffic light, then you can instantiate a standard Gymnasium env (see Gymnasium API):
import gymnasium as gym
import sumo_rl
env = gym.make('sumo-rl-v0',
               net_file='path_to_your_network.net.xml',
               route_file='path_to_your_routefile.rou.xml',
               out_csv_name='path_to_output.csv',
               use_gui=True,
               num_seconds=100000)
obs, info = env.reset()
done = False
while not done:
    next_obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
    done = terminated or truncated
For multi-agent environments, you can use the PettingZoo API (see PettingZoo API):
import sumo_rl
env = sumo_rl.parallel_env(net_file='nets/RESCO/grid4x4/grid4x4.net.xml',
                           route_file='nets/RESCO/grid4x4/grid4x4_1.rou.xml',
                           use_gui=True,
                           num_seconds=3600)
observations, infos = env.reset()
while env.agents:
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}  # this is where you would insert your policy
    observations, rewards, terminations, truncations, infos = env.step(actions)
In the folder nets/RESCO you can find the network and route files from RESCO (Reinforcement Learning Benchmarks for Traffic Signal Control), which was built on top of SUMO-RL. See their paper for results.
Check the experiments folder for examples of how to instantiate an environment and train your RL agent.
python experiments/ql_single-intersection.py
python experiments/ppo_4x4grid.py
Note: you need to install stable-baselines3 with pip install "stable_baselines3[extra]>=2.0.0a9" for Gymnasium compatibility.
python experiments/dqn_2way-single-intersection.py
python outputs/plot.py -f outputs/4x4grid/ppo_conn0_ep2
If you use this repository in your research, please cite:
@misc{sumorl,
author = {Lucas N. Alegre},
title = {{SUMO-RL}},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/LucasAlegre/sumo-rl}},
}
List of publications that use SUMO-RL (please open a pull request to add missing entries):