DLR-RM / rl-baselines3-zoo

A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
https://rl-baselines3-zoo.readthedocs.io
MIT License

[feature request] make rl-baselines3-zoo a python package #249

Closed busFred closed 1 year ago

busFred commented 2 years ago


Describe the bug
I have a project that builds an ad-hoc module around a trained RL policy. I work in my own code base and simply want to load the trained policies available in rl-trained-agents. The enjoy.py script in rl-baselines3-zoo relies on modules defined under the utils folder to handle loading and playing. However, since rl-baselines3-zoo is not a Python package, I cannot import those utility modules from outside the rl-baselines3-zoo folder. Making rl-baselines3-zoo an installable package that can be imported from external code would significantly increase code reusability.

Code example N/A


araffin commented 2 years ago

Hello,

so that question was already asked in https://github.com/araffin/rl-baselines-zoo/issues/107 and https://github.com/DLR-RM/rl-baselines3-zoo/issues/53

I guess what you would like to access is the loading utils from the utils/ folder? (which is also in charge of wrapping the env).

I think this is a good enough use case to at least add a small setup.py so that pretrained agents can be loaded easily.
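A setup.py along those lines might look like the sketch below. The package name, version, and the exact set of packaged folders are assumptions for illustration, not the actual layout the maintainers would pick:

```python
# setup.py -- minimal sketch for making the zoo pip-installable.
# Names ("rl_zoo", the "utils" package) are hypothetical assumptions.
from setuptools import setup, find_packages

setup(
    name="rl_zoo",  # hypothetical distribution name
    version="0.0.1",
    # Package the utils/ folder so `from utils import ...` works
    # from an installed copy as well as a local checkout.
    packages=find_packages(include=["utils", "utils.*"]),
    install_requires=["stable-baselines3"],
)
```

With a file like this in the repository root, `pip install -e .` would make the loading utilities importable from any external project.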

However, since rl-baselines3-zoo is not a Python package, I cannot import those utility modules from outside the rl-baselines3-zoo folder.

Well, it is possible by adding those files to the path, but it's not pretty, yes.
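The path workaround mentioned here amounts to something like the following sketch; the checkout location is a hypothetical example:

```python
import sys
from pathlib import Path

# Hypothetical location of a local rl-baselines3-zoo checkout.
zoo_path = Path.home() / "rl-baselines3-zoo"

# Prepend it so modules under the zoo's utils/ folder become importable
# from outside the repository, e.g.:
#   from utils.utils import create_test_env
sys.path.insert(0, str(zoo_path))
```

It works, but it couples the external project to a specific directory layout, which is why a proper setup.py is the cleaner fix.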

Side note: we now have Hugging Face Hub support, which should facilitate exporting and loading pre-trained agents, see https://huggingface.co/sb3/ppo-MountainCarContinuous-v0 (with the zoo) or https://huggingface.co/araffin/a2c-LunarLander-v2 (independent of the zoo)