LucasAlegre / sumo-rl

Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with Gymnasium, PettingZoo, and popular RL libraries.
https://lucasalegre.github.io/sumo-rl
MIT License

Environment sumo-rl doesn't exist. #114

Closed: kurkurzz closed this issue 2 years ago

kurkurzz commented 2 years ago
import gym
import sumo_rl
env = gym.make('sumo-rl-v0',
                net_file='path_to_your_network.net.xml',
                route_file='path_to_your_routefile.rou.xml',
                out_csv_name='path_to_output.csv',
                use_gui=True,
                num_seconds=100000)

I got this error when running the code provided in the documentation. I already:

Do I need to register sumo-rl env somewhere?
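
For context: gym.make only resolves ids that are in Gym's registry, and a package normally adds its ids as a side effect of being imported. A quick way to check what actually got registered, sketched to cover both the old and new registry APIs (everything here is standard gym; nothing sumo-rl-specific is assumed):

import gym
import sumo_rl  # importing the package is what should register 'sumo-rl-v0'

# The registry API changed around gym 0.26: older versions expose .all(),
# newer versions (and gymnasium) store the specs in a plain dict.
try:
    ids = [spec.id for spec in gym.envs.registry.all()]
except AttributeError:
    ids = list(gym.envs.registry.keys())

# An empty result means the registration never ran, i.e. sumo_rl was not
# importable from an installed copy of the package.
print([env_id for env_id in ids if 'sumo' in env_id])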

firemankoxd commented 2 years ago

I've had the same issue; try running python setup.py in the sumo-rl folder, hopefully that helps

kurkurzz commented 2 years ago

Running python setup.py produces this error:

usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
   or: setup.py --help [cmd1 cmd2 ...]
   or: setup.py --help-commands
   or: setup.py cmd --help

error: no commands supplied

I also tried running python setup.py install, but the environment is still not registered with gym.
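
For background on what "registered" means here: a package typically calls Gym's register() from its __init__.py, so the id only exists once import sumo_rl succeeds against an installed copy of the package. A minimal sketch of that mechanism (the entry_point string is an illustrative assumption, not sumo-rl's verified source):

from gym.envs.registration import register

# Runs when the package is imported; until then, gym.make('sumo-rl-v0')
# has no way to find the environment.
register(
    id='sumo-rl-v0',
    entry_point='sumo_rl.environment.env:SumoEnvironment',  # assumed path, for illustration
)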

LucasAlegre commented 2 years ago

The README has instructions on how to install the repo:

git clone https://github.com/LucasAlegre/sumo-rl
cd sumo-rl
pip install -e .
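
After the editable install, the original snippet should resolve the id. A quick sanity check, reusing the placeholder paths from the report (a sketch, assuming SUMO itself is installed and SUMO_HOME is set):

import gym
import sumo_rl  # must import cleanly for 'sumo-rl-v0' to be registered

env = gym.make('sumo-rl-v0',
               net_file='path_to_your_network.net.xml',
               route_file='path_to_your_routefile.rou.xml',
               out_csv_name='path_to_output.csv',
               use_gui=False,
               num_seconds=1000)
obs = env.reset()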