Closed: maximecb closed this issue 6 years ago
1) In my experience it was simply faster than reusing the states. The current approach uses batched computations on the GPU more efficiently (there should be no difference when the computations are performed on the CPU).
2) Recurrent policies would be significantly slower for KFAC, since they require writing custom RNN/LSTM kernels (instead of using the ones provided by cuDNN) and then applying specific approximations to them: https://openreview.net/forum?id=HyMTkQZAb&noteId=HkIsQkpSG This approximation hadn't been published when I implemented this code.
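Roughly, the recompute loop does something like the following (a simplified sketch only; the names, shapes, and sizes are illustrative, not the actual code in model.py):

```python
import torch
import torch.nn as nn

# Simplified sketch only; T = number of rollout steps, N = number of parallel processes.
# Names and sizes are illustrative, not the actual code in model.py.
gru = nn.GRUCell(input_size=64, hidden_size=64)

def recompute_states(features, init_state):
    """Re-run the recurrent cell over the whole rollout, one timestep at a time."""
    # features: (T, N, 64) encoder outputs, init_state: (N, 64) hidden state at rollout start
    outputs, h = [], init_state
    for t in range(features.size(0)):
        h = gru(features[t], h)
        outputs.append(h)
    # The alternative discussed here would be to return the states saved during the rollout
    # instead of re-running the cell.
    return torch.stack(outputs)  # (T, N, 64)
```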
In my experience it was simply faster than reusing the states.
Why is it faster to recompute the states? Presumably, the already-computed states could just be left in GPU memory. You can still batch the N steps together in the evaluate_actions computation, even if you are reusing the already-computed states.
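Concretely, I'm imagining something like this (just a sketch; `actor_head`, `critic_head`, and the shapes are placeholders, and I'm assuming the per-step states were saved during the rollout):

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

# Sketch only: assumes the recurrent hidden state for every rollout step was kept around.
# `actor_head`, `critic_head`, and all sizes are placeholders, not names from the repo.
hidden_size, num_actions = 64, 6
actor_head = nn.Linear(hidden_size, num_actions)
critic_head = nn.Linear(hidden_size, 1)

def evaluate_actions(stored_states, actions):
    # stored_states: (T*N, hidden_size) recurrent states already computed while acting
    # actions:       (T*N,) actions taken during the rollout
    dist = Categorical(logits=actor_head(stored_states))  # one batched pass, no time loop
    return critic_head(stored_states), dist.log_prob(actions), dist.entropy().mean()
```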
Recurrent policies would be significantly slower for KFAC, since they require writing custom RNN/LSTM kernels
If I were to write a recurrent policy that uses FC layers instead of an RNN, would it work out of the box with your KFAC optimizer? The optimizer doesn't handle GRU/RNN cells, but can it handle recurrent policies without those?
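For example, something along these lines (a hypothetical sketch; I'm assuming the KFAC optimizer only knows how to handle nn.Linear/nn.Conv2d modules):

```python
import torch
import torch.nn as nn

# Hypothetical recurrent policy built only from nn.Linear layers (no GRU/RNN cell),
# so an optimizer that only handles Linear/Conv2d layers would see nothing unusual.
class FCRecurrentPolicy(nn.Module):
    def __init__(self, obs_size=4, hidden_size=64, num_actions=6):
        super().__init__()
        self.input_fc = nn.Linear(obs_size, hidden_size)
        self.state_fc = nn.Linear(hidden_size, hidden_size)  # carries the recurrent state
        self.actor = nn.Linear(hidden_size, num_actions)
        self.critic = nn.Linear(hidden_size, 1)

    def forward(self, obs, state):
        # Elman-style recurrence expressed with plain FC layers
        state = torch.tanh(self.input_fc(obs) + self.state_fc(state))
        return self.critic(state), self.actor(state), state
```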
It's just what happened in practice. It may well be possible to make it faster, but the backward pass is more expensive anyway, so this wouldn't make the code significantly faster overall.
I have two questions regarding the implementation of recurrent policies:
Why do you have a loop recomputing states in your recurrent policy? It seems you could reuse the states you already computed and stored in rollouts (see the sketch after these questions). This would get rid of the loop, which is kind of ugly and difficult to follow (it took me a while to figure out what was happening): https://github.com/ikostrikov/pytorch-a2c-ppo-acktr/blob/master/model.py#L104
What's missing to have ACKTR and your KFAC optimizer support recurrent policies? This is something I would like to have, because ACKTR seems more resilient than straight A2C.
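To make the first question concrete: reusing the states would just mean saving them step by step while acting, roughly like this (a sketch; `RecurrentRollouts` and its fields are my own names, not the repo's RolloutStorage):

```python
import torch

# Sketch of keeping the recurrent state for every step so it can be reused at update time.
# The class name, field names, and shapes are illustrative only.
class RecurrentRollouts:
    def __init__(self, num_steps, num_processes, hidden_size):
        self.states = torch.zeros(num_steps + 1, num_processes, hidden_size)
        self.step = 0

    def insert(self, next_state):
        # next_state: (num_processes, hidden_size) produced by the policy while acting
        self.states[self.step + 1].copy_(next_state)
        self.step += 1
```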