Closed. jiamlu closed this issue 1 year ago.
In the past I could run it, but now I get errors after installing the latest version of sumo-rl. Maybe we have to downgrade ray and gym to run it.
I had the same problem and tried downgrading ray, but that doesn't seem to work either. Which version of ray were you using before?
New errors appear after ray is downgraded: `AttributeError: module 'gym.wrappers' has no attribute 'Monitor'`. This is because the API changed in newer gym releases and `Monitor` was removed from `gym.wrappers`. However, sumo-rl has its own requirements on the gym version, so after downgrading gym there are still new problems.
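For context, a minimal sketch of the API break: `Monitor` only exists in older gym releases, and its recording functionality was split into `RecordVideo` and `RecordEpisodeStatistics` in later gym/Gymnasium versions. A guarded import like the one below (an illustration, not sumo-rl's code) shows how code can detect which API it is running against without crashing at import time:

```python
# Monitor was removed from gym.wrappers in newer gym/Gymnasium releases;
# RecordVideo and RecordEpisodeStatistics replaced its functionality.
try:
    from gym.wrappers import Monitor  # only present in older gym versions
except ImportError:
    # Either gym is not installed, or this gym version no longer ships Monitor.
    Monitor = None

if Monitor is None:
    print("gym.wrappers.Monitor unavailable; use RecordVideo instead")
```

Code written against the old `Monitor` API has to be ported rather than only pinning versions, which is why downgrading one package keeps surfacing new conflicts.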
Hi @jiamlu ,
I've updated the examples using rllib and added instructions on how to use rllib and stable-baselines3 with Gymnasium in the README. Please let me know if there are more issues.
My environment: sumo_rl, ray 2.0.0

I modified lines 10, 11, and 32 as:

```python
from ray.rllib.agents.a3c.a3c import A3CTrainer
from ray.rllib.agents.a3c.a3c import a3c_torch_policy

'0': (a3c_torch_policy, spaces.Box(low=np.zeros(11), high=np.ones(11)), spaces.Discrete(2), {})
```

But the program still reports an error: `TypeError: cannot pickle 'module' object`
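One likely cause (an assumption based on the snippet above, not confirmed in the thread): `a3c_torch_policy` as imported there is a module, not a policy class, and RLlib serializes the multi-agent policy spec with pickle. Python modules cannot be pickled, which a minimal standalone repro shows:

```python
import math
import pickle

# Passing a module object where a picklable class is expected fails the
# same way for any module, not just RLlib policy modules.
try:
    pickle.dumps(math)
    raised = False
except TypeError as exc:
    raised = True
    print(exc)  # prints "cannot pickle 'module' object"
```

If that is the cause here, passing the policy class itself (e.g. `A3CTorchPolicy` from that module) in the policy spec, rather than the module, should avoid the pickling error.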
Could sumo-rl better support Ray RLlib, especially multi-agent algorithms?