hill-a / stable-baselines

A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
http://stable-baselines.readthedocs.io/
MIT License

[PPO2] problems resuming training #781

Open k0rean opened 4 years ago

k0rean commented 4 years ago

I'm trying to resume model training and I'm getting some strange results. I'm using SubprocVecEnv and VecNormalize on a custom environment:

from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import SubprocVecEnv, VecNormalize
from stable_baselines import PPO2
import os
...

env = SubprocVecEnv([init_env(i) for i in range(n_envs)])

if os.path.exists("ppo/model.zip"): # resume training
    norm_env = VecNormalize.load("ppo/norm_env.p", env)
    model = PPO2.load("ppo/model.zip", norm_env, reset_num_timesteps=False, verbose=0, tensorboard_log="./ppo/logs")
else: # new model
    norm_env = VecNormalize(env, norm_reward=False)
    model = PPO2(MlpPolicy, norm_env, verbose=0, tensorboard_log="./ppo/logs")

model.learn(total_timesteps=2500000)
model.save("ppo/model.zip")
norm_env.save("ppo/norm_env.p")
env.close()

[TensorBoard screenshot: episode reward curves of the two runs, plotted as separate curves with a discontinuity between them]

First, I don't know why it doesn't continue the existing TensorBoard training curve even though I passed reset_num_timesteps=False. I already updated TensorBoard to the latest version and saw the same behaviour. But the bigger problem is the discontinuity between the two runs. I already tried a single run with more timesteps (10e6) and got a continuously improving curve, but it never reached a reward of 2.5 the way the second run did here. The second run reached a higher reward almost from the start but then stopped improving. Am I making a mistake when loading the previous model?
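For reference, in stable-baselines reset_num_timesteps is a keyword argument of learn() rather than load(); a minimal sketch of resuming along those lines (reusing the init_env helper, n_envs, and paths from the snippet above, everything else unchanged):

if os.path.exists("ppo/model.zip"):  # resume: pass reset_num_timesteps to learn(), not load()
    norm_env = VecNormalize.load("ppo/norm_env.p", env)
    model = PPO2.load("ppo/model.zip", env=norm_env, verbose=0, tensorboard_log="./ppo/logs")
    model.learn(total_timesteps=2500000, reset_num_timesteps=False)
else:  # new model
    norm_env = VecNormalize(env, norm_reward=False)
    model = PPO2(MlpPolicy, norm_env, verbose=0, tensorboard_log="./ppo/logs")
    model.learn(total_timesteps=2500000)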

System Info

araffin commented 4 years ago

Related: https://github.com/hill-a/stable-baselines/issues/301 and https://github.com/hill-a/stable-baselines/issues/692. Regarding continuing the tensorboard log: this is a known plotting bug (I need to find the issue again).

Also, you should use a Monitor wrapper to have access to the original reward, so you can compare runs. The plotted reward is the normalized one; you cannot compare runs with it.
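Something along these lines (a rough sketch; make_custom_env is a placeholder for however you construct your custom environment):

from stable_baselines.bench import Monitor

def init_env(rank):
    def _init():
        env = make_custom_env()  # placeholder for your custom env constructor
        # Monitor logs the un-normalized episode rewards to ppo/monitor_<rank>.monitor.csv
        return Monitor(env, "ppo/monitor_%d" % rank, allow_early_resets=True)
    return _init

env = SubprocVecEnv([init_env(i) for i in range(n_envs)])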

Did you try using the rl zoo?

k0rean commented 4 years ago

I looked at those issues but didn't find a solution. That's not critical anyway. I'm not normalizing rewards with VecNormalize, only the observations, so that's not the cause of the discontinuity. No, I didn't, why?

njanirudh commented 3 years ago

@k0rean any solution to this problem?

Miffyli commented 3 years ago

@njanirudh I do not have a direct answer, but if possible, try out stable-baselines3 and see if it helps with your issue. It is more actively maintained, so we can discuss and fix bugs there :)
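For reference, a rough sketch of the same resume logic with stable-baselines3 (init_env, n_envs, and the paths are carried over from the original snippet as placeholders):

import os

from stable_baselines3 import PPO
from stable_baselines3.common.vec_env import SubprocVecEnv, VecNormalize

env = SubprocVecEnv([init_env(i) for i in range(n_envs)])

if os.path.exists("ppo/model.zip"):  # resume training
    norm_env = VecNormalize.load("ppo/norm_env.p", env)
    model = PPO.load("ppo/model.zip", env=norm_env, tensorboard_log="./ppo/logs")
    model.learn(total_timesteps=2_500_000, reset_num_timesteps=False)
else:  # new model
    norm_env = VecNormalize(env, norm_reward=False)
    model = PPO("MlpPolicy", norm_env, tensorboard_log="./ppo/logs")
    model.learn(total_timesteps=2_500_000)

model.save("ppo/model.zip")
norm_env.save("ppo/norm_env.p")
env.close()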

rambo1111 commented 5 months ago

https://github.com/hill-a/stable-baselines/issues/1192