Kaixhin / ACER

Actor-critic with experience replay
MIT License

batch_size for off-policy learning #4

Closed jingweiz closed 7 years ago

jingweiz commented 7 years ago

Hey, in the paper only one trajectory is sampled for each off-policy update, while here you use 16. For low-dimensional inputs this won't be much slower, but for higher-dimensional inputs it might be an issue? What do you think?
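To make the difference concrete, here's a rough sketch of what I mean (the `memory` list and `sample_off_policy_batch` helper are hypothetical stand-ins, not the repo's actual replay API):

```python
import random

# Hypothetical replay memory: a list of stored trajectories (each a list of transitions).
memory = [[('state', 'action', 'reward')] * 20 for _ in range(1000)]

def sample_off_policy_batch(memory, batch_size):
    # Paper: batch_size = 1 (one replayed trajectory per off-policy update).
    # This repo (as discussed here): batch_size = 16.
    return [random.choice(memory) for _ in range(batch_size)]

batch = sample_off_policy_batch(memory, batch_size=16)
```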

Kaixhin commented 7 years ago

The increase in compute time with batch size won't be linear because of the ability to do parallel computation (this is true for both GPUs and CPUs), so this shouldn't be an issue unless you use a very large batch size. Consider DQN: the original paper used a batch size of 32, which is much the same situation.
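A quick way to see the sub-linear scaling is to time one batched forward pass against a loop of single-sample passes. This is only a sketch, assuming PyTorch and a small made-up MLP, so the exact numbers will depend on your hardware:

```python
import time
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 4))
x = torch.randn(32, 128)

with torch.no_grad():
    start = time.time()
    for _ in range(100):
        net(x)  # one batched forward pass over 32 samples
    batched = time.time() - start

    start = time.time()
    for _ in range(100):
        for i in range(32):
            net(x[i:i + 1])  # 32 single-sample forward passes
    looped = time.time() - start

print(f'batched: {batched:.3f}s, looped: {looped:.3f}s')  # batched is typically much faster
```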

Note that other people have worked on batch versions of A3C (see Training Agent for First-Person Shooter Game with Actor-Critic Curriculum Learning), so it is possible to do the on-policy step with batches too.

jingweiz commented 7 years ago

Hey, thanks for the reply. But DQN runs on a GPU, so a large batch_size wouldn't be an issue there. The original A3C wouldn't get much speed-up from running on a GPU, since each forward pass only processes one sample; if a larger batch were fed in, I suspect the authors' claim that it can run purely on CPUs would no longer hold, especially when the input is high-dimensional. As for the paper you referred to, I only skimmed it, but I think it runs on a GPU? Would be glad to hear your insights :)

Kaixhin commented 7 years ago

It is true that Batch-A3C (what I referenced) and GA3C are batch versions of A3C that run training on GPU, and I would indeed expect the benefits of transferring to GPU to increase as batch size increases. However, if the network is smaller or the batch size is smaller (not necessarily 1), the time to transfer data to and from the GPU can become the bottleneck, and the benefit from GPU computation is diminished. Does the variance reduction from using a batch help at this point? Who knows.
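As a rough illustration of the transfer-overhead point, here is a PyTorch timing sketch (a small hypothetical network, requires a CUDA device; not code from this repo):

```python
import time
import torch
import torch.nn as nn

if torch.cuda.is_available():
    net = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 4)).cuda()
    x_cpu = torch.randn(4, 128)  # small batch

    with torch.no_grad():
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(1000):
            x_gpu = x_cpu.cuda()  # host -> device copy
            out = net(x_gpu)      # GPU forward pass
            out = out.cpu()       # device -> host copy
        torch.cuda.synchronize()
        print(f'with transfers: {time.time() - start:.3f}s')

        x_gpu = x_cpu.cuda()
        torch.cuda.synchronize()
        start = time.time()
        for _ in range(1000):
            out = net(x_gpu)      # forward pass only, data already on device
        torch.cuda.synchronize()
        print(f'compute only:   {time.time() - start:.3f}s')
```

For a network and batch this small, the copies can take a comparable amount of time to the computation itself, which is the bottleneck I was describing.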

jingweiz commented 7 years ago

Thanks a lot for the reply :)