edbeeching / godot_rl_agents

An Open Source package that allows video game creators, AI researchers and hobbyists the opportunity to learn complex behaviors for their Non Player Characters or agents
MIT License

ray[rllib] does not work in godotrl 0.6.0 #139

Closed visuallization closed 1 year ago

visuallization commented 1 year ago

ray[rllib] 2.2.0 does not currently work in godotrl 0.6.0, as it still expects the old gym wrapper while we upgraded our godot env to gymnasium. I tried upgrading to ray[rllib] 2.6.0 (which uses gymnasium instead of gym), but there we have a minor version mismatch which makes the installation fail. More specifically, sb3 depends on gymnasium==0.28.1 and ray depends on gymnasium==0.26.3.
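The two pins quoted above are mutually exclusive, which is why the resolver gives up. A quick sketch with the `packaging` library (assuming it is installed) makes this concrete:

```python
from packaging.specifiers import SpecifierSet

sb3_pin = SpecifierSet("==0.28.1")  # gymnasium pin from sb3
ray_pin = SpecifierSet("==0.26.3")  # gymnasium pin from ray[rllib] 2.6.0
combined = sb3_pin & ray_pin        # the combined constraint pip must satisfy

# No release of gymnasium can match two different exact pins at once.
print(any(v in combined for v in ["0.26.3", "0.28.1"]))  # → False
```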


I would love to host ray package on a separate github fork and loosen the gymnasium version to fix this but the ray project seems to be pretty big and a simple pip install from git does not work.

Any ideas how we can tackle this issue? @edbeeching @Ivan-267

edbeeching commented 1 year ago

Thanks for letting me know. I think the only solution is to have separate installs:

* `pip install godot-rl` for sb3

* `pip install godot-rl[sf]` for sb3 + sample-factory

* `pip install godot-rl[rllib]` for rllib (no sb3)

Then we can specify the rllib deps separately. What do you think?
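A sketch of how that split could look in `setup.py` (package names, pins, and the `godot-rl-core-deps` placeholder are illustrative, not the actual godot-rl metadata). One caveat worth noting: pip extras can only *add* packages, so for `[rllib]` to mean "no sb3", sb3 and its hard gymnasium pin cannot sit in `install_requires`:

```python
# Illustrative dependency layout for setup(install_requires=..., extras_require=...).
# Versions are taken from the thread above and are not verified.
INSTALL_REQUIRES = ["numpy"]  # hypothetical shared base deps only, no RL framework
EXTRAS_REQUIRE = {
    "sb3": ["stable-baselines3", "gymnasium==0.28.1"],
    "sf": ["stable-baselines3", "sample-factory"],
    "rllib": ["ray[rllib]==2.6.0", "gymnasium==0.26.3"],  # conflicts with sb3's pin
}
```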

Ivan-267 commented 1 year ago

> Thanks for letting me know. I think the only solution is to have separate installs:
>
> * `pip install godot-rl` for sb3
>
> * `pip install godot-rl[sf]` for sb3 + sample-factory
>
> * `pip install godot-rl[rllib]` for rllib (no sb3)
>
> Then we can specify the rllib deps separately. What do you think?

Personally I think this is fine. I assume the issue is with the `all` extra, which may already not work for me on Windows due to Sample Factory (I haven't tried `all` yet).

For my use case, not having `all` would not make much of a difference, as multiple options can still be installed together when needed (unless there's a conflict). One framework could always come to depend on something incompatible with another, so `all` could cause conflicts again in the future even if this one were resolved.

For quickly switching between frameworks for training, different virtual environments can still be used.
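Those per-framework environments can be scripted with the standard `venv` module; a minimal sketch, assuming the extra names proposed above (they are not released yet, so the install line is left commented out):

```python
import venv

# One isolated environment per RL framework (directory names are illustrative).
for name, extra in [("venv_sb3", "godot-rl"), ("venv_rllib", "godot-rl[rllib]")]:
    venv.create(name, with_pip=True)
    # To install into that environment, use its own interpreter, e.g.:
    # subprocess.run([f"{name}/bin/python", "-m", "pip", "install", extra], check=True)
    # (use f"{name}/Scripts/python" on Windows)
```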

visuallization commented 1 year ago

@edbeeching @Ivan-267 Okay, I agree with you guys. It would have been nice to keep every RL backend available in one single environment like before, but even if that worked now, it could still break with another update. So I guess splitting it up saves us a lot of future headache. Let's go with that solution. :)