Closed: yiwan-rl closed this issue 6 years ago
Hi! This is a stateful LSTM implementation, so the cell state is kept and carried forward through time: the cell state at step 20 is the input to the LSTMCell at step 21. What is done here:
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
The hx and cx Variables output by the LSTMCell are still tied to the old computation graph and cannot be backpropagated through again, so we wrap their underlying data in new Variables; these are then ready for BPTT in the next update.
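A minimal sketch of this idea (not the repo's actual code): in current PyTorch, `Variable` is merged into `Tensor` and `.detach()` plays the same role as `Variable(x.data)`, producing a tensor with the same values but no graph history.

```python
import torch
import torch.nn as nn

# Toy stateful LSTM: carry (hx, cx) across steps, then cut the graph
# at an update boundary so the next rollout's BPTT stops here.
cell = nn.LSTMCell(input_size=4, hidden_size=8)
hx = torch.zeros(1, 8)
cx = torch.zeros(1, 8)

for t in range(3):
    hx, cx = cell(torch.randn(1, 4), (hx, cx))

# hx now belongs to the autograd graph built over the 3 steps above.
assert hx.grad_fn is not None

# Detach before the next rollout: same values, no history,
# equivalent to the old Variable(hx.data) idiom.
hx, cx = hx.detach(), cx.detach()
assert hx.grad_fn is None
```

The point is *when* this detach happens, which is what the discussion below is about.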
Hi, thanks for your reply. Your action_train function is executed at every training step, and self.done is always False until the env resets. So you are actually setting
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
at nearly every time step. self.cx and self.hx then become fresh Variables each step, and gradients will not be propagated back through time.
If you check the project you referenced, https://github.com/ikostrikov/pytorch-a3c, it doesn't have this problem because it sets
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
only once every args.num_steps steps, instead of at every step.
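The effect of the placement can be shown without PyTorch at all. This hypothetical simulation (function and parameter names are illustrative) just counts how many past steps each update's BPTT can reach, depending on where the state is cut:

```python
def bptt_lengths(total_steps, num_steps, detach_every_step):
    """Return, for each update boundary, how many steps BPTT spans."""
    lengths = []
    depth = 0  # steps since the hidden state was last detached
    for t in range(1, total_steps + 1):
        depth += 1
        if detach_every_step:
            # buggy placement: state is detached on (almost) every step
            if t % num_steps == 0:
                lengths.append(depth)
            depth = 0
        elif t % num_steps == 0:
            # correct placement: detach only at the update boundary
            lengths.append(depth)
            depth = 0
    return lengths

print(bptt_lengths(60, 20, detach_every_step=True))   # [1, 1, 1]
print(bptt_lengths(60, 20, detach_every_step=False))  # [20, 20, 20]
```

With the bug, every update effectively trains a 1-step recurrence; with the fix, gradients flow through the full 20-step window.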
Hmm, you're right, it looks like I changed something here. I'll take a look in a little bit, but I'm very busy at the moment.
Oh, I don't think it's the problem of GPU/CPU.
self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
is ok for both GPU and CPU.
The problem is that you don't want to put these two lines in the "else" branch. That makes them execute at every time step, except when the episode terminates (self.done = True).
What you want is to execute these two lines only once every args.num_steps steps (in your setting, args.num_steps = 20).
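A hedged sketch of the corrected control flow (class and method names here are illustrative stand-ins, not the repo's actual API): reset hx/cx when an episode ends, leave them untouched on every in-between step so the graph keeps growing, and detach only once at the num_steps update boundary.

```python
class State:
    """Toy stand-in for a hidden-state tensor; records detach calls."""
    def __init__(self):
        self.detached = 0

    def detach(self):
        self.detached += 1
        return self

def train_window(hx, cx, num_steps, done_flags):
    for done in done_flags:
        if done:
            hx, cx = State(), State()  # episode reset: fresh zero state
        # ... run the LSTMCell with (hx, cx) here; state carried intact
    # update boundary reached: optimizer step, then cut the graph once
    hx, cx = hx.detach(), cx.detach()
    return hx, cx

hx, cx = train_window(State(), State(), 20, [False] * 20)
print(hx.detached)  # 1 -- detached once per window, not once per step
```

In the real code the detach would sit next to the gradient update that runs every args.num_steps, not inside the per-step else branch.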
It's fixed now, should be fine. Thanks!
Wow, thanks for spotting this; I had not noticed the error in the repo. My local version is not linked to GitHub, and I had just been checking using trained models, and the test part was fine, lol. Good spot! For clarity: none of the posted models with final performance numbers were trained with this bug in the code. Thanks again!
I noticed that in your player_util.py action_train function:
But how can you backpropagate gradients through time, to the past 20 steps, if you set: