ikostrikov / pytorch-a3c

PyTorch implementation of Asynchronous Advantage Actor Critic (A3C) from "Asynchronous Methods for Deep Reinforcement Learning".
MIT License
1.22k stars 280 forks

How to understand ensure_shared_grads? #55

Open luochao1024 opened 6 years ago

luochao1024 commented 6 years ago

I am kind of confused by ensure_shared_grads here: https://github.com/ikostrikov/pytorch-a3c/blob/master/train.py#L13. The grad is synced only when the shared grad is None. I think we need to set shared_param._grad = param.grad every time, because I don't see the grads being synced anywhere else. Could anyone give me some hints about this?
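For context, the function being asked about looks roughly like this (paraphrased from the linked train.py; check the repo for the exact current version):

```python
def ensure_shared_grads(model, shared_model):
    # Walk the local and shared parameters in lockstep.
    for param, shared_param in zip(model.parameters(),
                                   shared_model.parameters()):
        # If the shared grads are already set up, nothing to do.
        if shared_param.grad is not None:
            return
        # Point the shared parameter's grad at the local grad tensor.
        shared_param._grad = param.grad
```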

TolgaOk commented 5 years ago

When you share the model, the parameters' grad attribute is not shared across processes, so each process needs its own grad attribute. The assignment shared_param._grad = param.grad makes the shared parameter's grad point at the local process's grad tensor; later backward passes accumulate into that same tensor in place, so the shared side keeps seeing the updates. That's why we only need to assign it once for every process.
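Here is a minimal standalone sketch (not from the repo, just two bare parameters standing in for a local and a shared model parameter) showing the aliasing behavior described above:

```python
import torch

param = torch.nn.Parameter(torch.zeros(3))         # "local" parameter
shared_param = torch.nn.Parameter(torch.zeros(3))  # "shared" parameter

# First call: shared_param.grad is None, so alias it to the local grad.
param.grad = torch.zeros(3)      # normally created by backward()
shared_param._grad = param.grad  # same underlying tensor from now on

# Later backward passes accumulate into param.grad in place...
param.grad.add_(torch.ones(3))

# ...and the shared parameter sees the update without any further copying.
print(shared_param.grad)                # tensor([1., 1., 1.])
print(shared_param.grad is param.grad)  # True
```

Since the alias persists for the lifetime of the process, re-assigning shared_param._grad on every update would be redundant, which is why the function returns early once the shared grads are non-None.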