dgriff777 / rl_a3c_pytorch

A3C LSTM Atari with Pytorch plus A3G design
Apache License 2.0

Normalize #23

Closed yhcao6 closed 6 years ago

yhcao6 commented 6 years ago

In NormalizedEnv, I am puzzled why you chose alpha equal to 0.9999. If I want unbiased_mean, shouldn't alpha be (num_step - 1)/num_step?

I am also puzzled why normalizing the environment observations has such a huge influence on performance. Can you explain? Thanks!

dgriff777 commented 6 years ago

This is just standard batch normalization. Normalizing data is a standard neural network procedure that helps reduce variance in training and improves training speed. The agent will still perform if trained without it, just slightly slower and with much more variance.
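For reference, the NormalizedEnv wrapper under discussion keeps exponential moving averages of the observation mean and std; the division by (1 - alpha^n) is the standard bias correction for an exponential moving average (the same correction Adam uses), so alpha acts as a decay rate rather than a sample-count ratio. A sketch along these lines, based on the common A3C PyTorch implementation (details may differ slightly from this repo):

```python
import gym


class NormalizedEnv(gym.ObservationWrapper):
    """Normalize observations with exponential moving averages of mean/std."""

    def __init__(self, env=None):
        super(NormalizedEnv, self).__init__(env)
        self.state_mean = 0
        self.state_std = 0
        self.alpha = 0.9999   # decay rate of the moving averages
        self.num_steps = 0

    def observation(self, observation):
        self.num_steps += 1
        # Exponential moving averages of the per-frame mean and std.
        self.state_mean = self.state_mean * self.alpha + \
            observation.mean() * (1 - self.alpha)
        self.state_std = self.state_std * self.alpha + \
            observation.std() * (1 - self.alpha)

        # Standard EMA bias correction: divide by 1 - alpha^t.
        unbiased_mean = self.state_mean / (1 - pow(self.alpha, self.num_steps))
        unbiased_std = self.state_std / (1 - pow(self.alpha, self.num_steps))

        return (observation - unbiased_mean) / (unbiased_std + 1e-8)
```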

yhcao6 commented 6 years ago

Oh, I am going to read up on BN. I have another doubt about the fire-reset env: every time the environment is reset (when it is not a real done), it first executes action 1, which is FIRE, and then executes action 2, which is RIGHT. And the code always checks whether the episode is done.

My doubt is: why does a RIGHT action always follow the FIRE action? And why do we need to check whether the episode is done? Is it possible that after only 2 actions the game is already done...
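(For reference, the baselines-style fire-reset wrapper being described looks roughly like this, assuming the old 4-tuple gym step API; action 1 is FIRE and action 2 happens to be RIGHT in Breakout's action set, and the done checks guard against the rare case where those reset-time steps themselves end the episode.)

```python
import gym


class FireResetEnv(gym.Wrapper):
    """Press FIRE (and one more action) on reset, for games that sit idle until firing."""

    def __init__(self, env):
        gym.Wrapper.__init__(self, env)
        assert env.unwrapped.get_action_meanings()[1] == 'FIRE'
        assert len(env.unwrapped.get_action_meanings()) >= 3

    def reset(self, **kwargs):
        self.env.reset(**kwargs)
        # Action 1 is FIRE; if the step somehow ends the episode, reset again.
        obs, _, done, _ = self.env.step(1)
        if done:
            self.env.reset(**kwargs)
        # Action 2 (RIGHT in Breakout) nudges the game out of its idle state.
        obs, _, done, _ = self.env.step(2)
        if done:
            self.env.reset(**kwargs)
        return obs
```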

dgriff777 commented 6 years ago

I just matched the baselines wrapper for comparison. I'm going to go back to my old, simpler env wrapper. The whole fire-reset stuff is not really needed; none of my OpenAI top scores used fire reset. Personally, I think a good agent should learn to hit the fire button itself.

yhcao6 commented 6 years ago

Yeah, I also think so. Thank you!

yhcao6 commented 6 years ago

Could I ask: if I don't use the shared optimizer, will it have a big influence on performance?

dgriff777 commented 6 years ago

It slows down training a little (though the last time I trained with an unshared optimizer was nearly a year ago, so I'm not certain of the actual degree of influence). It also increases the variance of performance during training. I would suggest using the shared optimizer, as it is superior, unless you have a reason not to.
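The shared optimizer referred to here keeps the Adam moment estimates in shared memory so that every worker process reads and updates the same statistics; with per-worker optimizers each process keeps its own, staler moments. A minimal sketch of the idea (not the repo's exact implementation):

```python
import math

import torch
import torch.optim as optim


class SharedAdam(optim.Optimizer):
    """Adam whose state tensors live in shared memory, so all A3C
    worker processes read and update the same moment estimates."""

    def __init__(self, params, lr=1e-4, betas=(0.9, 0.999), eps=1e-8):
        defaults = dict(lr=lr, betas=betas, eps=eps)
        super(SharedAdam, self).__init__(params, defaults)
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'] = torch.zeros(1)
                state['exp_avg'] = torch.zeros_like(p.data)
                state['exp_avg_sq'] = torch.zeros_like(p.data)

    def share_memory(self):
        # Move optimizer state into shared memory before forking workers.
        for group in self.param_groups:
            for p in group['params']:
                state = self.state[p]
                state['step'].share_memory_()
                state['exp_avg'].share_memory_()
                state['exp_avg_sq'].share_memory_()

    def step(self, closure=None):
        for group in self.param_groups:
            beta1, beta2 = group['betas']
            for p in group['params']:
                if p.grad is None:
                    continue
                grad = p.grad.data
                state = self.state[p]
                state['step'] += 1
                t = state['step'].item()
                # Update biased first and second moment estimates in place.
                state['exp_avg'].mul_(beta1).add_(grad, alpha=1 - beta1)
                state['exp_avg_sq'].mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
                # Bias-corrected Adam parameter update.
                denom = (state['exp_avg_sq'].sqrt() /
                         math.sqrt(1 - beta2 ** t)).add_(group['eps'])
                p.data.addcdiv_(state['exp_avg'], denom,
                                value=-group['lr'] / (1 - beta1 ** t))
```

Each worker then copies its local gradients onto the shared model's parameters before calling step(), which is the usual A3C/hogwild pattern.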

yhcao6 commented 6 years ago

Hi,

[screenshot of the GAE code in train.py, 2018-02-25]

In train.py, I saw you using generalized advantage estimation, but by my calculation, with the default tau of 1, gae is equivalent to the advantage. So can I replace Variable(gae) with the advantage? Is my thinking correct?

Thanks!

dgriff777 commented 6 years ago

I usually set tau, which is the lambda in generalized advantage estimation, to 0.92 and see favorable results with that setting. It allows my agent more variance in exploration and an easier time finding sparse rewards.

And no, don't replace variables; that messes up gradient assignment. I would leave that part as is, but on the other hand, you should also do your own exploration and have fun configuring things to your fancy. 👍
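For anyone following along, the tau being discussed is the lambda in the GAE recursion. A sketch of the computation in isolation (the in-repo training loop interleaves this with the policy and value losses, so the code there looks different):

```python
import torch


def gae_advantages(rewards, values, gamma=0.99, tau=0.92):
    """Generalized advantage estimates for one rollout.

    rewards: list of T scalar rewards; values: list of T+1 value tensors,
    where values[-1] is the bootstrap value V(s_T). With tau=1 the sum of
    discounted TD errors telescopes to (discounted return - V(s_t)); with
    tau<1 nearby TD errors are weighted more heavily, trading a little
    bias for lower variance.
    """
    gae = torch.zeros(1)
    advantages = []
    for i in reversed(range(len(rewards))):
        # One-step TD error: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t).
        # Values are detached so the advantage acts as a constant weight in
        # the policy loss; the value loss is computed separately.
        delta_t = rewards[i] + gamma * values[i + 1].detach() - values[i].detach()
        gae = gamma * tau * gae + delta_t
        advantages.insert(0, gae)
    return advantages
```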

yhcao6 commented 6 years ago

I am not sure if you mean that, although gae and the advantage have the same value when tau is 1, they are still different variables, since they belong to two different losses, one the policy loss and the other the value loss, so I should construct separate Variables.

Could I ask further: what if they did use the same variable, i.e. gae were replaced directly with the advantage? Then in backpropagation the same variable would take part in two loss functions, although they are added up into one scalar before backward.

Thanks!

dgriff777 commented 6 years ago

Sorry, I misunderstood you. Yes, if tau=1 you can think of it as the “advantage estimate”, but you can't call it the “advantage function”, as we don't calculate Q values in A3C.
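Roughly, the distinction being drawn:

```latex
% Advantage *function*: needs an action-value Q, which A3C does not estimate.
A(s_t, a_t) = Q(s_t, a_t) - V(s_t)

% Advantage *estimate* actually used (the tau = 1 case of GAE): only the
% learned V and the sampled rewards over a k-step rollout are required.
\hat{A}_t = \Big(\sum_{l=0}^{k-1} \gamma^{l}\, r_{t+l} + \gamma^{k} V(s_{t+k})\Big) - V(s_t)
```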