Hello,
thanks for reporting the issue =)
The tensorboard logging was not meant to be active during hyperparameter optimization... I think this is related to https://github.com/DLR-RM/stable-baselines3/issues/109
If your intention is to deactivate tensorboard logging during hyperparameter optimization, you can just remove the tensorboard log dir here: https://github.com/DLR-RM/rl-baselines3-zoo/blob/c9d308103b1e460344f65b1a11e99b113f4a5347/train.py#L414 And maybe add a warning if a tensorboard log dir is provided during optimization.
Do you think it would be useful for a user to have tensorboard logging active during hyperparameter optimization?
Good question. I just looked at it out of curiosity and to see how many FPS I'm reaching. I think most of the time it's not really necessary and it can easily spam your tensorboard. I don't think it's worth doing a lot of rewriting.
> I think most of the time it's not really necessary and it can easily spam your tensorboard. I don't think it's worth doing a lot of rewriting.
Ok, then probably discarding the argument and printing a warning should be fine.
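A minimal sketch of such a guard, assuming the zoo's `--optimize-hyperparameters` and `--tensorboard-log` flags (the exact placement and wording in train.py would differ):

```python
import argparse
import warnings

# Hypothetical stand-in for the zoo's argument parser: only the two flags
# relevant to this issue are reproduced here.
parser = argparse.ArgumentParser()
parser.add_argument("--tensorboard-log", type=str, default="")
parser.add_argument("-optimize", "--optimize-hyperparameters", action="store_true")
args = parser.parse_args()

if args.optimize_hyperparameters and args.tensorboard_log != "":
    warnings.warn(
        "Tensorboard logging is deactivated during hyperparameter optimization: "
        "all trials would otherwise write to the same event file."
    )
    # Discard the log dir so the per-trial configure_logger() call
    # never creates a tensorboard writer.
    args.tensorboard_log = ""
```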
Describe the bug
Hyperparameter optimization breaks the tensorboard logging. When it is active and multiple optimization jobs are running, all datapoints are logged to the last job's tensorboard. While this does not break training itself, it makes the tensorboard log unreadable.
Code example
Running the following command will show two logs in the tensorboard, but all values are written to the second one.
System Info
Stable Baselines 0.9.0a1
Python 3.7.5
Tensorboard 2.3.0
torch 1.6.0
Additional context
I think the issue comes from the logger being a singleton. During normal training with multiple envs it is good that all of them write to the same tensorboard. During optimization, however, each job calls configure_logger, which overwrites the current tensorboard logger, so only one logger is left at the end, carrying the name of the last job.
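A toy sketch of that singleton pattern (not SB3's actual logger code, just an illustration of why the last trial ends up owning the writer):

```python
import os

# Toy stand-in for a module-level tensorboard writer shared by every caller.
_global_writer = None

def configure(log_dir):
    """Each trial calls this; it replaces the single shared writer."""
    global _global_writer
    os.makedirs(log_dir, exist_ok=True)
    _global_writer = open(os.path.join(log_dir, "events.log"), "a")

def record(value):
    """Writes always go to whichever writer was configured last."""
    _global_writer.write(f"{value}\n")

# Two trials start and each configures "its" logger ...
configure("trial_1")
configure("trial_2")
# ... so a value recorded on behalf of trial 1 ends up in trial_2/events.log.
record("ep_rew_mean=42")
```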