dgriff777 / rl_a3c_pytorch

A3C LSTM Atari with Pytorch plus A3G design
Apache License 2.0
563 stars 119 forks

Question on training function #19

Closed yiwan-rl closed 6 years ago

yiwan-rl commented 6 years ago

I noticed that in your player_util.py action_train function:

if self.done:
    if self.gpu_id >= 0:
        with torch.cuda.device(self.gpu_id):
            self.cx = Variable(torch.zeros(1, 512).cuda())
            self.hx = Variable(torch.zeros(1, 512).cuda())
    else:
        self.cx = Variable(torch.zeros(1, 512))
        self.hx = Variable(torch.zeros(1, 512))
else:
    self.cx = Variable(self.cx.data)
    self.hx = Variable(self.hx.data)

But how can you backpropagate gradients through time to the past 20 steps if you set:

self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)
dgriff777 commented 6 years ago

Hi! This is a stateful LSTM implementation, so the cell state is kept and carried forward through time: the cell state at step 20 is the input to the LSTMCell at step 21. What is done here:

self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)

The hx, cx outputs of the LSTMCell belong to the previous computation graph and cannot be backpropagated through again, so we create new Variables wrapping the underlying data of hx and cx; they are then ready for BPTT in the next update.

yiwan-rl commented 6 years ago

Hi, thanks for your reply. Your action_train function is executed at every training step, and self.done is always False until the env resets. So you are actually setting

self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)

nearly every time step. self.cx and self.hx then become new Variables, and gradients will not be passed back through them.
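The effect described here can be checked with a minimal standalone sketch (not the repo's code; in modern PyTorch, `Variable(t.data)` is equivalent to `t.detach()`):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
cell = nn.LSTMCell(4, 8)
x1, x2 = torch.randn(1, 4), torch.randn(1, 4)

# With a detach between steps (what Variable(hx.data) effectively does),
# the gradient path back through the first step is cut.
hx = torch.zeros(1, 8, requires_grad=True)
cx = torch.zeros(1, 8)
h1, c1 = cell(x1, (hx, cx))
h2, _ = cell(x2, (h1.detach(), c1.detach()))
h2.sum().backward()
print(hx.grad)  # None: no gradient reaches the initial hidden state

# Without the detach, gradients flow back through both steps.
hx2 = torch.zeros(1, 8, requires_grad=True)
h1, c1 = cell(x1, (hx2, cx))
h2, _ = cell(x2, (h1, c1))
h2.sum().backward()
print(hx2.grad is not None)  # True
```

Detaching at every step therefore reduces the update to one-step backprop, regardless of how long the rollout is.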

If you check the project you reference, https://github.com/ikostrikov/pytorch-a3c, it doesn't have this problem, because it sets

self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)

every args.num_steps, instead of every step.

dgriff777 commented 6 years ago

Hmm, you're right, it looks like I changed something here. I'll take a look in a little bit, but I'm very busy at the moment.

yiwan-rl commented 6 years ago

Oh, I don't think it's the problem of GPU/CPU.

self.cx = Variable(self.cx.data)
self.hx = Variable(self.hx.data)

is ok for both GPU and CPU.

The problem is that you don't want to put these two lines in the "else" branch. That makes them execute every time step, except when the episode terminates (self.done == True).

What you want to do is execute these two lines once every args.num_steps steps (in your setting, args.num_steps = 20).
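The pattern described above can be sketched in a standalone toy loop (names like `num_steps`, `cell`, and `head` are illustrative, not the repo's actual code; random inputs stand in for environment observations):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
cell = nn.LSTMCell(4, 8)
head = nn.Linear(8, 1)
num_steps = 20  # rollout length, matching args.num_steps in the discussion
hx = torch.zeros(1, 8)
cx = torch.zeros(1, 8)

for update in range(3):
    # Detach ONCE per rollout: gradients then flow back through all
    # num_steps LSTM steps of this rollout, but not into earlier rollouts.
    hx, cx = hx.detach(), cx.detach()
    loss = torch.zeros(1)
    for _ in range(num_steps):
        x = torch.randn(1, 4)  # stand-in for the env observation
        hx, cx = cell(x, (hx, cx))
        loss = loss + head(hx).pow(2).sum()
    cell.zero_grad()
    head.zero_grad()
    loss.backward()  # truncated BPTT over the last num_steps steps
```

This is the truncated-BPTT structure used in ikostrikov/pytorch-a3c: the hidden state is carried across rollouts (stateful), but the graph is cut at each update boundary rather than at each step.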

dgriff777 commented 6 years ago

It's fixed now, should be fine. Thanks!

Wow, thanks for spotting this; I had not noticed the error in the repo. My version is not linked to GitHub, and I've just been checking using trained models. And the test part was fine lol. Good spot! For clarity, all final performance numbers for the posted models were not trained with this bug in the code. Thanks again!