HumanCompatibleAI / adversarial-policies

Find best-response to a fixed policy in multi-agent RL
MIT License

Checkpointing support with Ray Tune #5

Open · AdamGleave opened this issue 5 years ago

AdamGleave commented 5 years ago

It would be nice to make modelfree.hyperparams.train_rl a tune.Trainable rather than a plain function, adding checkpointing support. This would let us use the HyperBand and Population Based Training schedulers. Conceptually this is easy enough: we already support saving models via save_callbacks, and can restore using load_path. However, the interfaces don't quite line up: Ray expects _train to perform one small training step, with _save called in between, and there's no good way to make Stable Baselines return part-way through training. We could call learn repeatedly with a small total_timesteps, but then the schedule progress would be wrong, breaking annealers.
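
For concreteness, here is a minimal sketch of what such a wrapper could look like, assuming stable-baselines' PPO2 and the class-based Tune API (_setup/_train/_save/_restore) that _train and _save above refer to. CHUNK_TIMESTEPS and the config keys are illustrative, not part of the codebase; the comment in _train marks exactly where the annealer problem bites:

```python
import os

from ray import tune
from stable_baselines import PPO2

# Illustrative chunk size: how much training one Tune "step" performs.
CHUNK_TIMESTEPS = 10_000


class TrainRLTrainable(tune.Trainable):
    """Sketch of wrapping Stable Baselines training in a Tune Trainable."""

    def _setup(self, config):
        # Config keys here are illustrative, not from the actual codebase.
        self.model = PPO2(config["policy"], config["env"])

    def _train(self):
        # Caveat from above: each learn() call treats CHUNK_TIMESTEPS as the
        # whole run, so LR/clip schedules see the wrong progress fraction.
        self.model.learn(total_timesteps=CHUNK_TIMESTEPS,
                         reset_num_timesteps=False)
        return {"timesteps_total": self.model.num_timesteps}

    def _save(self, checkpoint_dir):
        path = os.path.join(checkpoint_dir, "model.pkl")
        self.model.save(path)
        return path

    def _restore(self, checkpoint_path):
        # Re-attach the env, since PPO2.load does not restore it by itself.
        self.model = PPO2.load(checkpoint_path, env=self.model.get_env())
```

With something like this, tune.run could checkpoint and clone trials for HyperBand/PBT; the remaining blocker is making the annealers see global rather than per-chunk progress.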

AdamGleave commented 4 years ago

We can add support for this once https://github.com/ray-project/ray/issues/6162 is closed and https://github.com/ray-project/ray/pull/6192 is merged.