hill-a / stable-baselines

A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
http://stable-baselines.readthedocs.io/
MIT License

Trying to understand how the LSTM policy works #278

Open Caisho opened 5 years ago

Caisho commented 5 years ago

Dear @erniejunior,

I have been trying to trace how the LSTM policy works (with ACER) and it's rather confusing. My understanding is that n_steps is the LSTM sequence length, so each batch (n_env * n_steps) is fed into the LSTM policy in train_step. However, in _Runner.run, self.model.step only takes in a single observation of shape (1, obs_dim) instead of (n_steps, obs_dim) when generating the predicted action.

So my two questions are: 1) Can you explain a little how the LSTM policy works, given that it is trained on a sequence of observations but predicts with only one observation at a time? 2) It seems that the training batches are not slid across the sequence? E.g. with data {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} and a sequence length of 5, it is trained on the batches {0, 1, 2, 3, 4} and {5, 6, 7, 8, 9} rather than {0, 1, 2, 3, 4} followed by {1, 2, 3, 4, 5}.
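
For reference, the same one-observation-at-a-time pattern is visible from the public API: a recurrent policy is queried with a single observation per call, and the LSTM state is explicitly passed back in between calls. A minimal sketch using stable-baselines' predict() (the environment and hyperparameters here are arbitrary illustrative choices, not taken from this issue):

```python
import gym
import numpy as np

from stable_baselines import PPO2
from stable_baselines.common.vec_env import DummyVecEnv

# one environment; for recurrent policies, predict() expects as many
# observations as environments the model was set up with
env = DummyVecEnv([lambda: gym.make("CartPole-v1")])
model = PPO2("MlpLstmPolicy", env, nminibatches=1)

obs = env.reset()
state = None                # LSTM state starts at zero on the first call
done = np.array([False])    # mask used to reset the state at episode ends

for _ in range(100):
    # one observation per step; the recurrent memory travels in `state`
    action, state = model.predict(obs, state=state, mask=done)
    obs, reward, done, info = env.step(action)
```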

araffin commented 5 years ago

Hello,

I have been trying to trace how the LSTM policy works (with ACER) and it's rather confusing.

I think this is a good question and some documentation is needed on that. To be honest, I have not had the time to dive into the obscure mechanics of the LSTM in the codebase, but I would recommend looking at PPO2 or A2C instead, because the ACER code is very hard to read.

And please tell us your findings, that would be valuable for the community ;)

Related: #158

ernestum commented 5 years ago

I only ever looked at PPO2 too. I will try to get back to you when I have some more time in a few days!

araffin commented 5 years ago

Also related: https://github.com/openai/baselines/pull/859

andris955 commented 4 years ago

Hello,

Is there any update on this? I have the same questions as @Caisho. The way the LSTM policy is used doesn't make sense to me.

Miffyli commented 4 years ago

Admittedly that part of the code could be clearer, but this is how I have understood it:

No unrolling/backprop-through-time is used here. Each step is handled separately, where the hidden state is just one of the inputs. This makes learning harder but also makes the implementation easier, as we can treat hidden states just like any other input. The "right way" of doing recurrent policies with RL agents is still ongoing research (see e.g. R2D2). For prediction we just feed in observations and the hidden states from previous calls.

Note that this is based on the observation that states are stored as numpy arrays during training: they are fed alongside observations and are not updated during training steps.

Late edit: Disregard the above. The code does run backprop through time over the gathered rollout, i.e. over n_steps. The last known hidden state is used as the initial point, and only these initial states are stored in numpy arrays.
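
A conceptual numpy sketch of that data flow (this is not the library's code: dummy_train_step, the toy recurrence, and the exact shapes are illustrative stand-ins for what happens inside the TF graph):

```python
import numpy as np

n_envs, n_steps, obs_dim, n_lstm = 4, 8, 6, 64

# Gathered during the rollout: the full observation segment, the done flags,
# and only the LSTM state at the *start* of the segment.
obs_segment   = np.random.randn(n_steps, n_envs, obs_dim)
dones_segment = np.zeros((n_steps, n_envs), dtype=bool)
initial_state = np.zeros((n_envs, 2 * n_lstm))   # the only state kept as a numpy array

def dummy_train_step(obs_seq, init_state, dones):
    """Stand-in for the TF graph: the recurrence is re-run over the whole
    n_steps segment starting from init_state, so gradients can flow back
    through time within the segment (but not across segments)."""
    state = init_state
    for t in range(obs_seq.shape[0]):
        state = np.where(dones[t][:, None], 0.0, state)                  # reset at episode ends
        state = np.tanh(state + obs_seq[t].mean(axis=1, keepdims=True))  # dummy recurrence
    return state

final_state = dummy_train_step(obs_segment, initial_state, dones_segment)
print(final_state.shape)   # (4, 128)
```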

andris955 commented 4 years ago

Thank you @Miffyli

iza88 commented 3 years ago

@Miffyli sorry, I didn't get what your response means for questions 1 and 2.

Miffyli commented 3 years ago

@iza88 1) The hidden state is stored in a numpy array when predicting on one-step observations (the same is done inside the network during training, except that it all stays in the TF graph). 2) Training is done in batches of shape (num_envs, n_steps), parallelizing over the number of environments (the "batch size") and backpropagating through time along the second axis.
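
A tiny illustration of point 1, with hypothetical names (not the actual Runner code): the only recurrent bookkeeping outside the graph is a per-env numpy array of states that is fed into each single-step call and overwritten with the returned value; that same array then seeds the next training segment as its initial state.

```python
import numpy as np

n_envs, obs_dim, n_lstm = 4, 8, 64
states = np.zeros((n_envs, 2 * n_lstm))   # persists between calls, outside the TF graph
dones  = np.zeros(n_envs, dtype=bool)

def step_policy(obs, states, dones):
    """Hypothetical stand-in for the policy's single-step call: in reality the
    new state is computed inside the TF graph and returned as a numpy array."""
    states = np.where(dones[:, None], 0.0, states)              # reset finished envs
    states = np.tanh(states + obs.sum(axis=1, keepdims=True))   # dummy state update
    actions = np.zeros(n_envs, dtype=int)                       # dummy actions
    return actions, states

for t in range(5):                          # one-step predictions during the rollout
    obs = np.random.randn(n_envs, obs_dim)
    actions, states = step_policy(obs, states, dones)

# `states` now holds the initial state that the next training batch starts from
```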

iza88 commented 3 years ago

@Miffyli suppose num_envs=4, does that mean that on each rollout we get 4 points to train on, regardless of n_steps? E.g. a reward function of (n_steps, features_count) -> reward.

Do we skip all rewards (i.e. not train on them) except the last one as we collect these steps?

Miffyli commented 3 years ago

If num_envs=4, then the batch size will be 4, and in total there will be num_envs * n_steps points for training. I do not believe I understand the second question about rewards; no rewards are skipped.

iza88 commented 3 years ago

As far as I know, an LSTM model takes (m, n_features) as input, whereas a non-RNN model is fine with shape (n_features,).

If you say we get num_envs * n_steps points for training, that means the LSTM is fed with (n_features,), which confuses me.

Miffyli commented 3 years ago

Non-RNN models take all samples from all environments, bundle them together, and train on a batch of shape (num_envs * n_steps, n_features). RNN models keep the data in (num_envs, n_steps, n_features) format so that the RNN layer can process the data over time (the second dimension).
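
A small numpy sketch of that shape difference (purely illustrative; in the library the corresponding reshaping happens inside the TF graph):

```python
import numpy as np

num_envs, n_steps, n_features = 4, 5, 3
rollout = np.random.randn(num_envs, n_steps, n_features)   # data gathered per env, per step

# Non-recurrent policy: time structure is irrelevant, so all samples are
# flattened into one batch of independent data points.
flat_batch = rollout.reshape(num_envs * n_steps, n_features)
print(flat_batch.shape)   # (20, 3)

# Recurrent policy: keep envs on the batch axis and steps on the time axis,
# so the LSTM can be unrolled (and backpropagated) along axis 1.
seq_batch = rollout
print(seq_batch.shape)    # (4, 5, 3)
```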