araffin / rl-baselines-zoo

A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
https://stable-baselines.readthedocs.io/
MIT License

Hyperparameter tuning fails with Optuna 2.0.0 #97

Closed: jatkinson-CRL closed this issue 4 years ago

jatkinson-CRL commented 4 years ago

Describe the bug
Hyperparameter tuning fails with Optuna >= v2.0.0.

Code example
Install optuna >= v2.0.0, then try to tune hyperparameters for any algorithm/environment:

python train.py --algo ppo2 --env CartPole-v1 --optimize --n-trials 1000 --n-jobs 6 --sampler tpe --pruner median --verbose 1 --n-timesteps 1000

This almost immediately throws the error:

...
File "~/rl-baselines-zoo/utils/callbacks.py", line 31, in _on_step
    if self.trial.should_prune(self.eval_idx):
TypeError: should_prune() takes 1 positional argument but 2 were given

Looking at the Optuna source, should_prune() in version 2.0.0 (https://github.com/optuna/optuna/blob/v2.0.0/optuna/trial/_trial.py#L570) no longer accepts an optional step argument, whereas in versions <= 1.5.0 it did (https://github.com/optuna/optuna/blob/v1.5.0/optuna/trial/_trial.py#L538).
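For reference, here is a minimal sketch of how the pruning check can be adapted for Optuna >= 2.0 (the objective function and names below are illustrative, not the zoo's actual TrialEvalCallback): report the intermediate value with an explicit step via trial.report(), then call should_prune() with no arguments.

import optuna

def objective(trial):
    mean_reward = 0.0
    for eval_idx in range(10):
        # ... train and evaluate the agent here (placeholder) ...
        mean_reward += 1.0  # stands in for the real evaluation result

        # Optuna >= 2.0: pass the step to report(), not to should_prune()
        trial.report(mean_reward, step=eval_idx)
        if trial.should_prune():
            raise optuna.TrialPruned()
    return mean_reward

study = optuna.create_study(direction="maximize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=5)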

Downgrading to optuna <= v1.5.0 does fix the error. Thanks for the great package!
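In the meantime, a temporary workaround is to pin Optuna to a pre-2.0 release, e.g.:

pip install "optuna==1.5.0"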

System Info
Describe the characteristics of your environment:

araffin commented 4 years ago

Hello,
Thanks for reporting the bug; Optuna 2.0 was released very recently.

I pushed a bug fix for the SB3 version here: https://github.com/DLR-RM/rl-baselines3-zoo/pull/36

I will do the same here soon.

jatkinson-CRL commented 4 years ago

Thanks!