miyosuda / async_deep_reinforce

Asynchronous Methods for Deep Reinforcement Learning
Apache License 2.0

Any reason for choosing ACTION_SIZE = 3? Extension for continuous action? #24

Closed wsjeon closed 7 years ago

wsjeon commented 7 years ago

I cannot figure out why you chose ACTION_SIZE = 3.

In your 'gym' branch, I see that it was changed to 6.

As I remember, in the DQN paper (Nature), ACTION_SIZE is greater than 10, and I think that in some cases it can affect performance.

Any reason?

Also, do you have any plans to extend your work to continuous action domains?

p.s.

I think your code is awesome! :)

miyosuda commented 7 years ago

In the old ALE version that I forked, the getMinimalActionSet() function returns 3 actions.

Recent ALE versions and OpenAI Gym's self.env.action_space return 6 actions. This is why I set these values in constants.py.
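For example, something like this could read the action count from Gym at runtime instead of hard-coding it in constants.py (just a sketch, not code from this repo; the "Pong-v0" id is only an example):

```python
# Sketch: query the environment for its action count instead of hard-coding it.
# Assumes the gym package; "Pong-v0" is an example environment id.
import gym

env = gym.make("Pong-v0")
ACTION_SIZE = env.action_space.n  # 6 for Pong in Gym
print("ACTION_SIZE =", ACTION_SIZE)
```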

Please refer to the discussion around here: https://github.com/miyosuda/async_deep_reinforce/issues/1#issuecomment-216037766

In ALE, the full action set has 18 actions, but I'm using the minimal action set, which contains only the actions actually used in the game.
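As a rough sketch (assuming the ale_python_interface bindings and a local ROM file, which are not part of this repo), you can compare the two sets like this:

```python
# Sketch: compare ALE's full (legal) action set with the minimal action set.
# Assumes the ale_python_interface bindings; "pong.bin" is a hypothetical ROM path.
from ale_python_interface import ALEInterface

ale = ALEInterface()
ale.loadROM(b"pong.bin")  # hypothetical ROM path

legal_actions = ale.getLegalActionSet()      # full set: 18 actions
minimal_actions = ale.getMinimalActionSet()  # only the actions the game actually uses
print(len(legal_actions), len(minimal_actions))
```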

> Also, do you have any plans to extend your work to continuous action domains?

I have no plans to implement that right now. Sorry.

wsjeon commented 7 years ago

Thank you for your comments! :)