https://github.com/Guojyjy/CoTV/blob/main/CoTV%20demo.mp4
The experiments are conducted on the simulator platform SUMO. The model design and implementation are based on Flow, and training uses RLlib, an open-source library for reinforcement learning.
It is highly recommended to install Anaconda, which makes it convenient to set up a dedicated environment for Flow and its dependencies.
Please download the project. It covers the whole Flow framework together with my model implementation built on top of it.
git clone git@github.com:Guojyjy/CoTV.git
To create the environment and install Flow with its dependencies, first change into the Flow directory with cd ~/CoTV/flow, then enter the commands below:
conda env create -f environment.yml
conda activate flow
python setup.py develop # if the conda install fails, try the next command to install the requirements using pip
pip install -e . # install flow within the environment
The Flow documentation provides more installation details: Local installation of Flow.
Please note that defining $SUMO_HOME during the SUMO installation can cause an error in the Flow installation, so please install Flow first.
It is highly recommended to use the installation methods from Downloads-SUMO documentation.
The experiments shown in the paper were conducted on SUMO Version 1.10.0.
The instructions covered in Installing Flow and SUMO from the Flow documentation are outdated.
# run the following commands to check the version/location information or load SUMO GUI
which sumo
sumo --version
sumo-gui
If sumo-gui reports Warning: Cannot find local schema '../sumo/data/xsd/types_file.xsd', will try website lookup., set $SUMO_HOME to $../sumo instead of $../sumo/bin.
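A quick way to sanity-check $SUMO_HOME is to verify that the schema directory SUMO looks for actually exists beneath it. The sketch below is illustrative (the helper name `check_sumo_home` is ours, not part of SUMO or Flow):

```python
import os

def check_sumo_home(path):
    """Return True if `path` looks like a valid SUMO_HOME.

    SUMO resolves local XML schemas relative to $SUMO_HOME/data/xsd,
    so pointing SUMO_HOME at the bin/ subdirectory instead of the SUMO
    root breaks the lookup and triggers the website-lookup warning.
    """
    return os.path.isdir(os.path.join(path, "data", "xsd"))

# Example: warn if the current environment variable is misconfigured
sumo_home = os.environ.get("SUMO_HOME", "")
if not check_sumo_home(sumo_home):
    print(f"SUMO_HOME={sumo_home!r} has no data/xsd directory; "
          "point it at the SUMO root, not its bin/ subdirectory")
```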
If you encounter ModuleNotFoundError: No module named 'flow' or ImportError: No module named flow.subpackage:
- Run pip install -e . to install flow within the environment, as mentioned at Install FLOW.
- A Python interpreter found under /sumo/bin/ may cause the error. Use which python to check the Python currently in use, and echo $PATH to check the order of the directories searched for python.
- Add export PATH=/../anaconda3/env/flow/bin:$PATH in the file ~/.bashrc, then apply it with:
source ~/.bashrc
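The reason prepending the conda environment's bin directory works is that the shell takes the first matching executable on $PATH. That lookup can be reproduced in a few lines; the helper `which_first` below is illustrative, not part of Flow:

```python
import os

def which_first(name, path=None):
    """Return the first executable called `name` found on PATH,
    mimicking how the shell resolves `which python`."""
    path = path if path is not None else os.environ.get("PATH", "")
    for d in path.split(os.pathsep):
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate  # earlier PATH entries win
    return None
```

Because earlier entries win, putting /../anaconda3/env/flow/bin first guarantees the environment's interpreter is picked up over any other python on the system.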
We have built a docker image to simplify the installation of the project.
To run a docker container based on the CoTV docker image:
# first pull the image from docker hub and run a container
# -d, run container in background and print container ID
# --env, --volume, allow to execute sumo-gui
docker run -dit --env="DISPLAY" --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" gjyjy/cotv:latest
# find the container id
docker ps
# interact with the running container in terminal
docker exec -it CONTAINER_ID bash
# exit the container
exit / [ctrl+D]
# stop the container
docker container stop CONTAINER_ID
If you get an error about FXApp::openDisplay: unable to open display
when using sumo-gui, adjust the permission of the X server host on your local machine:
xhost +local:
Enter the project in the specific environment:
cd ~/CoTV/flow
conda activate flow
# for 1x1 and 1x6 grid maps
python examples/train_ppo.py CoTV_grid --num_steps 150
# for Dublin scenario, i.e., six consecutive intersections, or another extended Dublin scenario covering almost 1km^2
python examples/train_ppo.py CoTV_Dublin
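The training entry points above share one command-line shape: a positional experiment-config name plus an optional step count. A hedged sketch of that interface (not the actual train_ppo.py source; argument names follow the usage shown above):

```python
import argparse

def make_parser():
    """Build a parser matching the usage shown above:
    python examples/train_ppo.py <EXP_CONFIG> [--num_steps N]."""
    parser = argparse.ArgumentParser(
        description="Train a PPO agent on a Flow experiment config.")
    parser.add_argument("exp_config", type=str,
                        help="experiment configuration name, e.g. CoTV_grid")
    parser.add_argument("--num_steps", type=int, default=None,
                        help="number of training iterations")
    return parser

# Example: parse the 1x1/1x6 grid invocation from above
args = make_parser().parse_args(["CoTV_grid", "--num_steps", "150"])
print(args.exp_config, args.num_steps)  # CoTV_grid 150
```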
NOTE: CoTV and M-CoTV are implemented and uploaded in another branch, M-CoTV. My customized DRL framework, named Coach, supports traffic control under various simulated road scenarios provided by SUMO while simplifying the experiment configuration required by Flow. CoTV achieves the same level of traffic improvement as when running on Flow.
python examples/train_dqn.py PressLight_grid
python examples/train_dqn.py PressLight_Dublin
python examples/train_dqn.py FixedTime_grid
python examples/train_dqn.py FixedTime_Dublin
Implemented based on PressLight [1] with specific settings in the modules of flow/examples/exp_configs/rl/multiagent
Static traffic light
env=EnvParams(
    additional_params={
        "static": True,
    },
)
python examples/train_dqn.py GLOSA_grid
python examples/train_dqn.py GLOSA_Dublin
Implemented based on PressLight [1] with specific settings in the modules of flow/examples/exp_configs/rl/multiagent
python examples/train_ppo.py FlowCAV_grid
python examples/train_ppo.py FlowCAV_Dublin
Road network configuration files generated for SUMO during the training process are stored in flow/flow/core/kernel/network/debug
Experiment output files are saved in CoTV/output, according to the emission path set in the modules of flow/examples/exp_configs/rl
CoTV/evaluation/outputFilesProcessing.py filters the output files in CoTV/output
CoTV/evaluation/getResults.py computes the traffic statistics
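For reference, SUMO's tripinfo output can be reduced to summary statistics with a few lines of standard-library Python. The snippet below is an illustrative sketch, not the actual getResults.py implementation; the attribute names (duration, waitingTime) follow SUMO's tripinfo output schema:

```python
import xml.etree.ElementTree as ET
from statistics import mean

def trip_stats(tripinfo_xml):
    """Compute mean travel time and mean waiting time from a SUMO
    tripinfo output string."""
    root = ET.fromstring(tripinfo_xml)
    trips = root.findall("tripinfo")
    return {
        "mean_duration": mean(float(t.get("duration")) for t in trips),
        "mean_waiting": mean(float(t.get("waitingTime")) for t in trips),
    }

# Minimal hand-written example of a tripinfo file body
sample = """<tripinfos>
  <tripinfo id="veh0" duration="85.0" waitingTime="12.0"/>
  <tripinfo id="veh1" duration="95.0" waitingTime="8.0"/>
</tripinfos>"""
print(trip_stats(sample))  # {'mean_duration': 90.0, 'mean_waiting': 10.0}
```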
@article{guo2023cotv,
title={CoTV: Cooperative Control for Traffic Light Signals and Connected Autonomous Vehicles Using Deep Reinforcement Learning},
author={Guo, Jiaying and Cheng, Long and Wang, Shen},
journal={IEEE Transactions on Intelligent Transportation Systems},
year={2023},
publisher={IEEE}
}
[1] Wei, Hua, et al. "Presslight: Learning max pressure control to coordinate traffic signals in arterial network." Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019.
[2] Wu, Cathy, et al. "Flow: Architecture and benchmarking for reinforcement learning in traffic control." arXiv preprint arXiv:1710.05465 10 (2017).