kengz / SLM-Lab

Modular Deep Reinforcement Learning framework in PyTorch. Companion library of the book "Foundations of Deep Reinforcement Learning".
https://slm-lab.gitbook.io/slm-lab/
MIT License

Resume a training #444

Closed · ingambe closed this issue 4 years ago

ingambe commented 4 years ago

Are you requesting a feature or an implementation? I would like to know if it is possible to load a previously trained experiment and continue the training, i.e. load the neural network and start a new training run with the previously trained network as the initial network. This would be useful when a previous experiment did not reach a plateau within the assigned number of steps, or to reuse a previously trained network for a similar task.

If you have any suggested solutions
  • Add a "resume_training" mode in order to continue the training
  • Or add the possibility to load a neural net model in a training spec

ingambe commented 4 years ago

I've already started to work on this here. This is just a PoC for the moment and it of course still needs to be unit tested. But before I go any further, I would like to know if my approach is correct and makes sense.

Approach

I've defined a new meta parameter called load_net, which allows me to load a network in the same spirit as model_prepath for an enjoy spec (in the future it may make more sense to use a relative path to the .pt model). Then, when SLM Lab loads the algorithm, if it finds the load_net key in the agent spec, it loads the network.
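For illustration, here is a minimal sketch of what that check could look like (not the exact PoC code): it assumes a hypothetical load_net entry under the spec's meta section, assumes the algorithm lists its network attributes in net_names as SLM-Lab algorithms do, and uses an illustrative naming scheme for the saved .pt files.

```python
import torch


def maybe_load_pretrained(algorithm, spec):
    """Load saved weights into the algorithm's networks if the spec asks for it (sketch)."""
    load_net = spec.get('meta', {}).get('load_net')  # hypothetical key
    if load_net is None:
        return  # no pre-trained network requested; train from scratch
    # net_names is assumed to list the algorithm's network attributes, e.g. ['net', 'target_net']
    for net_name in getattr(algorithm, 'net_names', ['net']):
        net = getattr(algorithm, net_name)
        # the file naming below is illustrative, mirroring per-network *.pt model files
        state_dict = torch.load(f'{load_net}_{net_name}_model.pt', map_location='cpu')
        net.load_state_dict(state_dict)
```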

"Test"

In order to check whether this approach works, I have modified the demo spec (DQN CartPole) to split it into two parts: one part that trains until 3000 frames, and another that loads the previous network and trains for another 7000 frames.
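As a rough illustration of the split (not the exact specs used here), the two parts could be generated from the demo spec like this, assuming the usual demo.json layout where max_frame lives under env and using the hypothetical load_net meta key; the model path is a placeholder.

```python
import copy
import json

# load the stock demo spec shipped with SLM-Lab
with open('slm_lab/spec/demo.json') as f:
    demo = json.load(f)
base = demo['dqn_cartpole']

# first part: train from scratch, but stop at 3000 frames
first = copy.deepcopy(base)
first['env'][0]['max_frame'] = 3000

# second part: start from the first part's saved network and train for 7000 more frames
second = copy.deepcopy(base)
second['env'][0]['max_frame'] = 7000
second['meta']['load_net'] = 'data/<first_part_model_prepath>'  # placeholder path

with open('slm_lab/spec/demo_two_shot.json', 'w') as f:
    json.dump({'dqn_cartpole_first': first, 'dqn_cartpole_second': second}, f, indent=2)
```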

One-shot training

[Figure: dqn_cartpole_t0_trial_graph_mean_returns_ma_vs_frames]
Using the demo.json for 10,000 frames, we end up reaching a mean return of approximately 130. Here is the result of the experiment: one_shot.zip

Two-shot training

In order to better see the improvement, and since the number of frames is reduced, the evaluation and log frequency has been dropped from 500 to 100.

First part

[Figure: dqn_cartpole_t0_trial_graph_mean_returns_ma_vs_frames]
As we can see, we start from a low mean return (20) and reach 70 after 3000 frames. Here is the result of the experiment: two_shot_first_part.zip

Second part

[Figure: dqn_cartpole_t0_trial_graph_mean_returns_ma_vs_frames]
As we can see, we start at 70 (the previous mean return) and reach 140 after 7000 frames. Here is the result of the experiment: two_shot_second_part.zip

kengz commented 4 years ago

Hi @ingambe thanks for looking at this, and the fantastic showcase above!

We have not implemented a resume function yet, but it should be relatively simple and clean, since the enjoy mode already does most of what's needed for resuming: it loads nearly everything required, with only a few pieces missing. Here are a few requirements to ensure overall consistency:

  1. the command could work like train@{prename}, similar to how enjoy works. With this we don't introduce an extra command or modify the core logic in the code. The prename refers to the saved folder of a trial, which contains its sessions.
  2. what to save and load: saving already stores everything required for resuming. Loading is the main thing to take care of here (see the sketch after this list):
    • like the normal construction of a trial, it should take the prename of a trial folder and load all of its sessions. This can reuse the logic in retro_analysis.
    • pytorch model loading: already done in the resume function
    • load the body dataframes like body.train_df and body.eval_df
    • set the environment's clock using the max frame in body.train_df. Once the clock is set, everything will be set in motion correctly because the lab follows the clock consistently. This means the graph plotting, learning rate decay, reporting etc. will resume as if there had been no interruption.
    • one thing that's currently not saved but required is the agent's memory, though this can easily exceed 7 GB per session.
  3. pass the CI build
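For illustration, a hedged sketch of what the loading side of these steps could look like, not the actual implementation: the dataframe file names, the 'frame' column, and the clock attribute are assumptions, net_names is assumed to list the algorithm's networks, and prepath is a placeholder.

```python
import pandas as pd
import torch


def resume_session(agent, env, prepath):
    """Sketch: restore models, dataframes and clock before resuming training.
    Following point 1, this would be triggered by something like:
        python run_lab.py slm_lab/spec/demo.json dqn_cartpole train@{prename}
    """
    body = agent.body
    # reload the pytorch models (the enjoy mode already does this part)
    for net_name in getattr(agent.algorithm, 'net_names', ['net']):
        net = getattr(agent.algorithm, net_name)
        net.load_state_dict(torch.load(f'{prepath}_{net_name}_model.pt', map_location='cpu'))
    # reload the body dataframes so metrics continue from where they stopped
    body.train_df = pd.read_csv(f'{prepath}_train_df.csv')  # assumed file name
    body.eval_df = pd.read_csv(f'{prepath}_eval_df.csv')    # assumed file name
    # set the clock from the last recorded frame so LR decay, plotting and
    # reporting resume as if there had been no interruption
    env.clock.frame = int(body.train_df['frame'].max())     # assumed clock attribute
```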

If you open a PR I can also work with you to get the steps above implemented, or if you prefer to wait a bit I can also get to it sometime this week/next.

ingambe commented 4 years ago

Hi @kengz

I've created a draft pull request #445 so we can work together on this. As you suggested, I've created the train@{previous_experience} command, but I've combined it with the meta argument previously defined (because it may be good to be able to load a network from outside the lab for transfer learning; what do you think?), and I'm struggling with the dataframe loading and the agent memory.

Thank you for your help

kengz commented 4 years ago

Implemented resume mode, see the linked PR above.