jerrodparker20 / adaptive-transformers-in-rl

Adaptive Attention Span for Reinforcement Learning

Is this algorithm suitable for off-policy learning? #15

Open dbsxdbsx opened 3 years ago

dbsxdbsx commented 3 years ago

I just finished reading your paper, and I noticed that it is an on-policy method.
I am wondering if anyone has tested it with an RL method that uses a replay buffer.
As far as I know, for an off-policy method with a recurrent structure (LSTM, GRU, attention, or a transformer), if the hidden state is stored together with a sample (s, a, r, s'), that hidden state becomes stale after enough training, because the network that produced it keeps changing. Is this issue overcome by the adaptive transformer?
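To make the concern concrete, here is a minimal sketch of what I mean (hypothetical PyTorch code, not from this repo): the hidden state cached at collection time drifts away from what the updated network would produce.

```python
import torch
import torch.nn as nn

gru = nn.GRU(input_size=8, hidden_size=16, batch_first=True)
obs = torch.randn(1, 1, 8)

# At collection time, the hidden state is stored together with the transition.
_, h_at_collection = gru(obs)
replay_buffer = [{"obs": obs, "hidden": h_at_collection.detach()}]

# Stand-in for many gradient updates: the weights move.
with torch.no_grad():
    for p in gru.parameters():
        p.add_(0.1 * torch.randn_like(p))

# When the sample is replayed, the cached hidden state was produced by the
# *old* weights, so it no longer matches what the current network computes.
sample = replay_buffer[0]
_, h_fresh = gru(sample["obs"])
print((sample["hidden"] - h_fresh).abs().max())  # nonzero: the cached state is stale
```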

shaktikshri commented 3 years ago

The algorithm we use is IMPALA, which relies on V-trace targets; that makes it an instance of off-policy learning. As for the adaptive transformer, it just makes sure that the attention context length is not fixed but learned over the course of training. The Transformer-XL used in our experiments takes care of caching the hidden states for previous (state, action) pairs, which plays a role similar to the replay buffer you're pointing to.
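For concreteness, here is a minimal sketch of the V-trace targets from the IMPALA paper (Espeholt et al., 2018). This version ignores episode terminations and per-step discount masking for brevity, and the function name is mine:

```python
import torch

def vtrace_targets(rewards, values, bootstrap, log_rhos,
                   gamma=0.99, clip_rho=1.0, clip_c=1.0):
    """Minimal V-trace sketch. All inputs are 1-D tensors of length T,
    except `bootstrap`, the value estimate for the state after the last step.
    `log_rhos` are log(pi/mu) importance ratios between learner and actor."""
    rhos = torch.exp(log_rhos)
    clipped_rhos = torch.clamp(rhos, max=clip_rho)
    cs = torch.clamp(rhos, max=clip_c)

    next_values = torch.cat([values[1:], bootstrap[None]])
    deltas = clipped_rhos * (rewards + gamma * next_values - values)

    # Backward recursion: a_s = delta_s + gamma * c_s * a_{s+1}
    acc = torch.zeros_like(bootstrap)
    advantages = []
    for t in reversed(range(rewards.size(0))):
        acc = deltas[t] + gamma * cs[t] * acc
        advantages.append(acc)
    advantages = torch.stack(advantages[::-1])

    return values + advantages  # the v_s targets
```

The clipped importance ratios are what correct for the actor's policy lagging behind the learner's, which is why the stale-data problem is less severe here than with a conventional replay buffer.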
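And the learned context length comes from the soft masking of Sukhbaatar et al. (2019), roughly like this (a sketch; the class and parameter names are illustrative, not the repo's):

```python
import torch
import torch.nn as nn

class AdaptiveSpanMask(nn.Module):
    """Soft mask m_z(x) = clamp((R + z - x) / R, 0, 1), where x is the
    distance of an attended position from the query, z is a learned span,
    and R is a fixed ramp width. Because the ramp is differentiable in z,
    each head can learn how far back it actually needs to attend."""

    def __init__(self, max_span: int, ramp: int = 32):
        super().__init__()
        self.ramp = ramp
        # One learned span per head would be typical; a scalar keeps it simple.
        self.z = nn.Parameter(torch.tensor(float(max_span) / 2))

    def forward(self, attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: softmax attention over the last `span` positions,
        # oldest first, so index 0 is at distance span - 1 from the query.
        span = attn_weights.size(-1)
        distance = torch.arange(span - 1, -1, -1,
                                device=attn_weights.device,
                                dtype=attn_weights.dtype)
        mask = torch.clamp((self.ramp + self.z - distance) / self.ramp, 0, 1)
        masked = attn_weights * mask  # positions beyond the span are zeroed
        return masked / (masked.sum(-1, keepdim=True) + 1e-8)  # renormalise
```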