LucasAlegre / sumo-rl

Reinforcement Learning environments for Traffic Signal Control with SUMO. Compatible with Gymnasium, PettingZoo, and popular RL libraries.
https://lucasalegre.github.io/sumo-rl
MIT License

experiments/a3c_4x4grid.py can't support ray2.0 #110

Closed jiamlu closed 1 year ago

jiamlu commented 2 years ago

my environment: sumo_rl ray 2.0.0

I modified lines 10, 11, and 32 as follows:

```python
from ray.rllib.agents.a3c.a3c import A3CTrainer
from ray.rllib.agents.a3c.a3c import a3c_torch_policy
'0': (a3c_torch_policy, spaces.Box(low=np.zeros(11), high=np.ones(11)), spaces.Discrete(2), {})
```

But the program still reports an error: `TypeError: cannot pickle 'module' object`
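As an aside on what this error usually means: `a3c_torch_policy` imported this way is a *module*, not a policy class, and Python cannot pickle module objects, so passing it into a multi-agent policy spec fails the moment Ray tries to serialize the config. A minimal stdlib-only demonstration of the underlying limitation (no Ray or SUMO required):

```python
import pickle
import types

# Create a bare module object, analogous to the imported
# a3c_torch_policy *module* (as opposed to a policy *class*).
mod = types.ModuleType("example_module")

try:
    pickle.dumps(mod)  # modules are not picklable
except TypeError as e:
    print(e)  # cannot pickle 'module' object
```

If this is indeed the cause, the fix would be to pass the policy class itself (e.g. something like `A3CTorchPolicy` defined inside that module; exact name/location depends on the Ray version) rather than the module.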

Could sumo-rl better support ray-rllib, especially its multi-agent algorithms?

TrinhTuanHung2021 commented 2 years ago

In the past, I could run it. But now I get errors after installing the latest version of sumo-rl. Maybe we have to downgrade ray and gym to run it.

locker2153 commented 2 years ago

I had the same problem and tried downgrading ray, but that doesn't seem to work either. So what version of ray were you using before?

locker2153 commented 2 years ago

> In the past, I could run it. But now I get errors after installing the latest version of sumo-rl. Maybe we have to downgrade ray and gym to run it.

New errors appear after ray is downgraded: `AttributeError: module 'gym.wrappers' has no attribute 'Monitor'`. This is because the API changed in newer gym versions and `Monitor` was removed from `gym.wrappers`. However, sumo-rl has its own requirements on the gym version, so after downgrading gym there are still new problems.
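For context on this `AttributeError`: newer gym releases replaced `gym.wrappers.Monitor` with separate recording wrappers (`RecordVideo`, `RecordEpisodeStatistics`), so code that hardcodes `Monitor` breaks. One common workaround is to guard on whichever attribute exists. The sketch below uses stand-in namespaces instead of real gym modules, so the names `old_wrappers`/`new_wrappers` and `pick_recorder` are hypothetical illustrations of the pattern, not gym API:

```python
import types

# Stand-ins for the old and new gym.wrappers namespaces (hypothetical):
# old gym exposed Monitor; newer gym exposes RecordVideo instead.
old_wrappers = types.SimpleNamespace(Monitor=object)
new_wrappers = types.SimpleNamespace(RecordVideo=object)

def pick_recorder(wrappers):
    # Prefer the legacy Monitor when present, otherwise fall back
    # to the replacement RecordVideo wrapper.
    monitor = getattr(wrappers, "Monitor", None)
    return monitor if monitor is not None else wrappers.RecordVideo

print(pick_recorder(old_wrappers) is old_wrappers.Monitor)      # True
print(pick_recorder(new_wrappers) is new_wrappers.RecordVideo)  # True
```

With real gym installed, the same `getattr`-based guard lets one codebase run against both the pre-removal and post-removal wrapper APIs.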

LucasAlegre commented 1 year ago

Hi @jiamlu ,

I've updated the examples using rllib and added instructions on how to use rllib and stable-baselines3 with Gymnasium in the README. Please let me know if there are more issues.