Caisho opened this issue 5 years ago (status: Open)
Hello,
I've been trying to trace how the LSTM policy works (with ACER) and it's rather confusing.
I think this is a good question and some documentation is needed on that. To be honest, I did not have the time to dive into the obscure mechanics of LSTM in the codebase, but I would rather recommend looking at PPO2 or A2C, because the code of ACER is very hard to read.
And please share your findings; that would be valuable for the community ;)
Related: #158
I have only ever looked at PPO2 too. I will try to get back to you when I have some more time in a few days!
Also related: https://github.com/openai/baselines/pull/859
Hello,
Is there any update on this? I have the same questions as @Caisho. The way the LSTM policy is used doesn't make sense to me.
Admittedly that part of the code could be clearer, but this is how I have understood it:
No unrolling/backprop-through-time is used here. Each step is handled separately, and the hidden state is just one of the inputs. This makes learning harder but also makes the implementation easier, as we can treat hidden states just like any other input. The "right way" of doing recurrent policies with RL agents is still an area of ongoing research (see e.g. R2D2). For prediction we just feed in observations and hidden states from previous calls.
Note that this is based on the observation that states are stored as numpy arrays during training; they are fed alongside observations and are not updated during training steps.
Late edit: Disregard the above. The code does run backprop through time over the gathered rollout, i.e. over n_steps. The previously known hidden state is used as the initial point, and only these initial states are stored in numpy arrays.
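To make that late edit concrete, here is a rough toy sketch in numpy (not the actual stable-baselines code; `lstm_step` and all of the shapes are made-up stand-ins). The point it illustrates: only the hidden state at the *start* of each rollout is kept as a numpy array outside the graph, and training later re-runs the cell over the whole n_steps chunk from that initial state, so backprop-through-time covers the rollout.

```python
import numpy as np

n_envs, n_steps, n_hidden, n_features = 4, 5, 8, 3

def lstm_step(obs, state):
    """Hypothetical stand-in for one recurrent cell update."""
    return np.tanh(state + obs.mean(axis=1, keepdims=True))

# Hidden state carried between rollouts as a plain numpy array.
state = np.zeros((n_envs, n_hidden))

def collect_rollout(state):
    initial_state = state.copy()       # only this is stored for training
    obs_batch = []
    for _ in range(n_steps):
        obs = np.random.randn(n_envs, n_features)
        state = lstm_step(obs, state)  # state evolves while acting
        obs_batch.append(obs)
    # Training later re-runs the cell over obs_batch starting from
    # initial_state, so gradients flow through all n_steps.
    return np.stack(obs_batch, axis=1), initial_state, state

obs_batch, init_s, state = collect_rollout(state)
print(obs_batch.shape)   # (4, 5, 3)
print(init_s.shape)      # (4, 8)
```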
Thank you @Miffyli
@Miffyli sorry, I didn't get what your response means for questions 1 and 2
@iza88
1) The hidden state is stored in a numpy array when predicting for one-step observations (the same is done inside the network during training, except it all happens in the TF graph)
2) Training is done in batches of (num_envs, n_steps), parallelizing over the number of environments ("batch size") and backpropagating through time on the second axis.
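A toy sketch of point 2 (assumed shapes, with a made-up `tanh` cell standing in for the real LSTM): the environment axis is the batch dimension, and the loop over the second axis is where backprop-through-time happens.

```python
import numpy as np

num_envs, n_steps, n_features, n_hidden = 4, 5, 3, 8
batch = np.random.randn(num_envs, n_steps, n_features)
state = np.zeros((num_envs, n_hidden))

outputs = []
for t in range(n_steps):               # unrolled over time (second axis)
    x_t = batch[:, t, :]               # all envs in parallel: (num_envs, n_features)
    state = np.tanh(state + x_t.sum(axis=1, keepdims=True))  # toy cell update
    outputs.append(state)

out = np.stack(outputs, axis=1)        # (num_envs, n_steps, n_hidden)
print(out.shape)
```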
@Miffyli
Suppose num_envs=4. Does that mean on a rollout we get 4 points to train on, regardless of n_steps?
e.g. reward function: (n_steps, features_count) -> reward
Do we skip all rewards (i.e. not train on them) except the last one as we collect these steps?
If num_envs=4, then the batch size will be 4. Then in total there will be num_envs * n_steps points for training. I do not believe I understand the second question about rewards. No rewards are skipped.
As far as I know, the LSTM model takes (m, n_features) as an input, whereas a non-RNN model is ok with shape (n_features,). If you say we get num_envs * n_steps points for training, that means the LSTM is fed with (n_features,), which confuses me.
Non-RNN models take all samples from all environments, bundle them together and train on a batch of shape (num_envs * n_steps, n_features). RNN models keep the data in (num_envs, n_steps, n_features) format so that the RNN layer can process data over time (the second dimension).
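The shape difference described above can be sketched like this (toy numbers, not library code):

```python
import numpy as np

num_envs, n_steps, n_features = 4, 5, 3
rollout = np.random.randn(num_envs, n_steps, n_features)

# Feed-forward policy: sample order does not matter, so flatten into
# one big batch of independent points.
ff_batch = rollout.reshape(num_envs * n_steps, n_features)   # (20, 3)

# Recurrent policy: keep the time axis so the cell can be unrolled over it.
rnn_batch = rollout                                          # (4, 5, 3)

print(ff_batch.shape, rnn_batch.shape)
```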
Dear @erniejunior,
I've been trying to trace how the LSTM policy works (with ACER) and it's rather confusing. My understanding is that n_steps = LSTM sequence length, and so each batch (n_env * n_steps) is fed into the LSTM policy for train_step. However, in _Runner.run, self.model.step only takes in 1 obs (1, obs_dim) step instead of (n_steps, obs_dim) when generating the predicted action.
So my 2 questions are: 1) Can you explain a little how the LSTM policy works when it is trained with a sequence of obs but predicts with only 1 obs? 2) It seems that the batch training step is not slid across the sequence? e.g. with data {0, 1, 2, 3, 4, 5, 6, 7, 8, 9} and timesteps of 5, it is trained as batches of {0, 1, 2, 3, 4} and {5, 6, 7, 8, 9} rather than {0, 1, 2, 3, 4} followed by {1, 2, 3, 4, 5}
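For concreteness, the two batching schemes contrasted in question 2 can be sketched as follows (a toy illustration using the {0, ..., 9} example, not library code):

```python
import numpy as np

data = np.arange(10)
seq_len = 5

# Non-overlapping chunks: {0 1 2 3 4}, {5 6 7 8 9}
chunks = [data[i:i + seq_len] for i in range(0, len(data), seq_len)]

# Sliding window: {0 1 2 3 4}, {1 2 3 4 5}, ..., {5 6 7 8 9}
windows = [data[i:i + seq_len] for i in range(len(data) - seq_len + 1)]

print(len(chunks), len(windows))   # 2 6
```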