rlberry-py / rlberry

An easy-to-use reinforcement learning library for research and education.
https://rlberry-py.github.io/rlberry
MIT License

Impossible to import A2C without optuna in gymnasium branch #311

Open YannBerthelot opened 1 year ago

YannBerthelot commented 1 year ago

Importing the A2C agent from rlberry.agents.torch automatically tries to import optuna (which is not part of rlberry's install dependencies, but of the "default" extra dependencies) and therefore fails with a ModuleNotFoundError.

There should be a way to import and use A2C without relying on optuna for basic usage.

Traceback:

```
tests/test_rlberry.py:5: in <module>
    from rlberry.agents.torch import A2CAgent
.venv/lib/python3.8/site-packages/rlberry/__init__.py:6: in <module>
    from rlberry.utils.logging import configure_logging
.venv/lib/python3.8/site-packages/rlberry/utils/__init__.py:2: in <module>
    from .check_agent import (
.venv/lib/python3.8/site-packages/rlberry/utils/check_agent.py:12: in <module>
    from optuna.samplers import TPESampler
E   ModuleNotFoundError: No module named 'optuna'
```
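One possible direction, sketched here only as an illustration and assuming the module layout shown in the traceback above, would be to guard the optuna import in rlberry/utils/check_agent.py so that the error is raised only when an optuna-based check is actually used (the helper function below is hypothetical, not existing rlberry API):

```python
# Hypothetical sketch for rlberry/utils/check_agent.py: make optuna optional.
# Module path and import taken from the traceback above; the helper is illustrative.
try:
    from optuna.samplers import TPESampler
except ModuleNotFoundError:
    TPESampler = None


def _require_optuna():
    """Raise a helpful error only when a hyperparameter-optimization check runs."""
    if TPESampler is None:
        raise ModuleNotFoundError(
            "optuna is required for hyperparameter-optimization checks; "
            "install the extras (e.g. `pip install rlberry[default]`) or optuna itself."
        )
```

With a guard like this, `from rlberry.agents.torch import A2CAgent` would no longer pull in optuna at import time for basic usage.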

Version

rlberry version: 0.4.0.post30.dev0+9116ac1, installed without extras.

Reproducing

```
poetry add git+https://github.com/rlberry-py/rlberry#gymnasium
>>> from rlberry.agents.torch import A2CAgent
```
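Until the import is made optional, a user-side workaround (a sketch only, not an endorsed fix) is to install optuna alongside rlberry, or to catch the import-time failure and surface a clearer hint:

```python
# Hypothetical user-side guard: rlberry currently imports optuna eagerly, so the
# failure appears at import time; re-raise it with an actionable message.
try:
    from rlberry.agents.torch import A2CAgent
except ModuleNotFoundError as err:
    raise ModuleNotFoundError(
        "rlberry imports optuna at import time; install it manually "
        "(e.g. `poetry add optuna`) or install rlberry with its extras."
    ) from err
```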

KohlerHECTOR commented 1 year ago

@TimotheeMathieu and I believe that hyperparameter optimization might be out of scope for rlberry. Furthermore, it is difficult to implement well. As such, we propose to remove it, or at least to point to a good tutorial in the documentation instead. @AleShi94, you have already done hyperparameter optimization; do you think we should keep it as a feature of rlberry?