DLR-RM / rl-baselines3-zoo

A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
https://rl-baselines3-zoo.readthedocs.io
MIT License

[Enhancement] Multiple model iterations per Optuna trial and mean performance objective #204

Open seawee1 opened 2 years ago

seawee1 commented 2 years ago

I currently have the problem that the results Optuna optimization produces are often far from optimal, due to the stochastic nature of RL training. For example, training 3 agents with the same set of hyperparameters can result in 3 completely different learning curves (at least for the environment I'm training on). Might it make sense to implement the optimization code in such a way that multiple agents are trained per trial, and the mean or median performance is reported to Optuna instead?

In `utils/exp_manager.py`, in `hyperparameter_optimization` (line 713), I saw your comment `# TODO: eval each hyperparams several times to account for noisy evaluation`. Is that maybe exactly what you mention there?

I already had a look at the code and thought a bit about how one might implement this. If there is interest, I could implement it and open a pull request!
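Roughly, what I have in mind (just a sketch; `sample_hyperparams` and `make_model` are placeholder helpers, not the zoo's actual API):

```python
import numpy as np
from stable_baselines3.common.evaluation import evaluate_policy

def objective(trial):
    hyperparams = sample_hyperparams(trial)  # placeholder for the zoo's sampler
    scores = []
    for seed in range(3):  # several agents, same hyperparameters
        model = make_model(hyperparams, seed=seed)  # placeholder model factory
        model.learn(total_timesteps=n_timesteps)
        mean_reward, _ = evaluate_policy(model, eval_env)
        scores.append(mean_reward)
    # Aggregate over seeds so a single lucky/unlucky run doesn't dominate
    return float(np.median(scores))
```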

seawee1 commented 2 years ago

Regarding the duplicate tag (you are probably referring to issue #151?): I can definitely see your point, but why not implement it and let the user decide via a configurable training script argument?

If implemented correctly, I also don't see why this would hinder the use of pruners: they could work based on the mean/median objective performance of the current and past trials.

araffin commented 2 years ago

Hello,

Sorry for the late reply, I was on holidays...

> Is that maybe exactly what you mention there?

Yes.

> Regarding the duplicate tag (you are probably referring to issue https://github.com/DLR-RM/rl-baselines3-zoo/issues/151 ?)

Yes, and that comment: https://github.com/DLR-RM/rl-baselines3-zoo/issues/151#issuecomment-903681089

> I can definitely see your point, but why not implement it and let the user decide via a configurable training script argument?

I would be happy to have a draft PR ;)

You should also know that this exists: https://github.com/DLR-RM/rl-baselines3-zoo/issues/114

> If implemented correctly, I also don't see why this would hinder the use of pruners.

How do you prune a trial before the end of a run if your objective is the mean/median of several runs?

qgallouedec commented 2 years ago

> I can definitely see your point, but why not implement it and let the user decide via a configurable training script argument?

I agree with this. Faced with the same problem, I've already implemented a script that roughly does this. If you open a PR, I would be happy to contribute.

> How do you prune a trial before the end of a run if your objective is the mean/median of several runs?

By training multiple models simultaneously and interleaving training with evaluation. Something like:

```python
import numpy as np
import optuna
from stable_baselines3.common.evaluation import evaluate_policy

# ...
for split in range(n):
    mean_rewards = []
    for model in models:
        # Continue training where the previous split left off
        model.learn(split_size, reset_num_timesteps=False)
        mean_reward, _ = evaluate_policy(model, eval_env)
        mean_rewards.append(mean_reward)
    median_score = np.median(mean_rewards)
    # One intermediate report per split, at the current timestep count
    trial.report(median_score, (split + 1) * split_size)
    if trial.should_prune():
        raise optuna.TrialPruned()
```
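Reporting one intermediate value per split is what makes pruning possible here: `trial.should_prune()` can stop the whole trial after any split, long before every model has been trained to the end.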

I wonder if you can run, say, 50 or so models simultaneously, without having memory problems or anything.

araffin commented 2 years ago

> If you open a PR, I would be happy to contribute.

Please do =)

> By training multiple models simultaneously and interleaving training with evaluation.

I was afraid of that answer... Yes, it works, but not for image-based environments, and it requires a beefy machine anyway (for instance, for DQN on Atari, a single model may require 40GB of RAM). We also need to check that `model.learn(reset_num_timesteps=False)` works well with schedules.
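A quick sanity check could look like this (just a sketch, not tested; `linear_schedule` is written inline so the snippet is self-contained):

```python
from stable_baselines3 import PPO

def linear_schedule(initial_value):
    # SB3-style schedule: receives progress_remaining, going from 1.0 to 0.0
    def schedule(progress_remaining):
        return progress_remaining * initial_value
    return schedule

model = PPO("MlpPolicy", "CartPole-v1", learning_rate=linear_schedule(3e-4))
for _ in range(4):
    model.learn(total_timesteps=2_500, reset_num_timesteps=False)
    # If the schedule restarts between chunks, these values won't decay
    # monotonically over the whole run
    print(model.policy.optimizer.param_groups[0]["lr"])
```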

> 50 or so models simultaneously, without having memory problems or anything.

I would run at most 3-5 models simultaneously, unless the env is very simple and the network is small.

qgallouedec commented 2 years ago

> Please do =)

Let's open a draft PR and continue the discussion there.