alexis-jacq / Pytorch-DPPO

Pytorch implementation of Distributed Proximal Policy Optimization: https://arxiv.org/abs/1707.02286
MIT License

average gradients to update global theta? #7

Open weicheng113 opened 5 years ago

weicheng113 commented 5 years ago

Thanks for the nice implementation in PyTorch, which made it easier for me to learn.

Regarding the chief.py implementation, I have a question about the updates to the global weights. From the algorithm pseudocode in the paper, it seems the averaged gradients from the workers are used to update the global weights, but chief.py looks like it is using the sum of the workers' gradients. Is that right? Thanks.

Cheng

yusukeurakami commented 5 years ago

@weicheng113 I think the same as you do. In my code, I am adding the following division:

p._grad = shared_grad_buffers.grads[n+'_grad']/params.num_processes

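To illustrate where this division fits, here is a minimal pure-Python sketch of the chief-side update, under my own assumptions: the names (`worker_grads`, `shared_grad_buffer`, the learning rate, and the plain-SGD step) are illustrative stand-ins, not the repo's actual objects, and the gradient values are made up.

```python
num_processes = 4

# Hypothetical per-worker gradients for a 2-parameter model.
worker_grads = [[0.1, -0.2], [0.3, 0.0], [0.1, 0.2], [0.5, 0.4]]

# Chief side: workers accumulate their gradients into a shared buffer,
# so the buffer holds the SUM over all workers.
shared_grad_buffer = [0.0, 0.0]
for g in worker_grads:
    for i, gi in enumerate(g):
        shared_grad_buffer[i] += gi

# The fix discussed above: divide by the number of workers so the chief
# applies the AVERAGE gradient, matching the pseudocode in the paper.
avg_grad = [g / num_processes for g in shared_grad_buffer]

# Plain SGD step on the global parameters (lr is illustrative).
theta, lr = [1.0, 1.0], 0.1
theta = [p - lr * g for p, g in zip(theta, avg_grad)]
```

With these numbers the summed gradient is [1.0, 0.4], the average is [0.25, 0.1], and the updated parameters are [0.975, 0.99]; without the division the step would be 4 times larger.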
weicheng113 commented 5 years ago

@yusukeurakami Thanks for the reply. Do you mean you are going to add the averaging in this line - https://github.com/alexis-jacq/Pytorch-DPPO/blob/ec9303419078689d38a0eb89f220f69e3e105897/chief.py#L16

Or have you already added it somewhere that I did not find? Thanks.

@yusukeurakami Sorry, I thought you were the author of the code. :) By the way, is the training working fine after applying the division?

yusukeurakami commented 5 years ago

@weicheng113 No problem. I replied to you because I was stuck at the same place. I don't have enough data points to compare the results yet. I will update with my results when I have them.

weicheng113 commented 5 years ago

@yusukeurakami Thanks a lot.

yusukeurakami commented 5 years ago

@weicheng113 I've run my training with 7 workers in total, so with averaging, the gradients are divided by 7 on every update. However, from the results, both the averaged and non-averaged runs converged to the same values in almost the same number of update steps. I don't really understand why it behaves the same even though the parameter updates were 7 times smaller...
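One possible explanation (my assumption; the thread does not say which optimizer the runs used): if the global parameters are updated with Adam, rescaling the gradient by a constant largely cancels out, because Adam's step is the first-moment estimate divided by the square root of the second-moment estimate, and both moments scale with the gradient. A pure-Python sketch, with a hypothetical `adam_step` helper implementing the standard Adam update:

```python
import math

def adam_step(grad_seq, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """Apply the standard Adam update to a scalar parameter starting at 0
    for a sequence of gradients; return the final parameter value."""
    m = v = 0.0
    theta = 0.0
    for t, g in enumerate(grad_seq, start=1):
        m = beta1 * m + (1 - beta1) * g          # first moment: scales with g
        v = beta2 * v + (1 - beta2) * g * g      # second moment: scales with g**2
        m_hat = m / (1 - beta1 ** t)             # bias correction
        v_hat = v / (1 - beta2 ** t)
        # m_hat / sqrt(v_hat) is invariant to g -> c*g, up to eps.
        theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta

grads = [0.5, 0.8, 0.3, 0.6]                     # made-up "summed" gradients
summed = adam_step(grads)                        # no averaging
averaged = adam_step([g / 7 for g in grads])     # averaged over 7 workers
diff = abs(summed - averaged)                    # tiny: only eps breaks the symmetry
```

If this assumption holds, dividing the gradients by 7 changes the Adam step only through the small `eps` term, which could explain why both runs converged in almost the same number of updates; with plain SGD the steps really would be 7 times smaller.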

weicheng113 commented 5 years ago

@yusukeurakami Thanks for sharing your findings. I don't understand it either. My gut feeling is that averaging should make the updates steadier, with smaller steps. Could it be that the environment you are trying to solve is simple enough that the difference does not show?

yusukeurakami commented 5 years ago

@weicheng113 I am running a robot arm with 7 joints in continuous action and state space (a custom MuJoCo environment). It should be complex enough.

weicheng113 commented 5 years ago

@yusukeurakami Ok, thanks.