Closed: hohoCode closed this issue 6 years ago
Thanks for the kind words.
Well, first, this is not G-A3C; this is my A3G setup, which is done in a different way that I explain in more detail in the readme.
The batch size here is the number of steps before an update, which is set to 20. The 1 in the model is just referring to the number of frames used to infer each action. And as this is not a supervised model, you need to take an action to find out the next observation, which is then fed to the model to infer the following action.
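As a rough illustration of that cadence (a toy sketch, not the repo's actual code; the `policy` and `env_step` functions below are stand-ins for the real model and Atari environment):

```python
# Toy sketch of the A3C-style rollout loop: infer one action per frame
# (a "batch" of 1), step the env, and only update after NUM_STEPS steps.
NUM_STEPS = 20  # the "batch size": steps collected before each update

def policy(obs):
    # stand-in for the model: one observation in, one action out
    return obs % 4

def env_step(obs, action):
    # stand-in for env.step(): returns next observation and a reward
    return obs + 1, 1.0

def rollout(start_obs):
    obs, rewards = start_obs, []
    for _ in range(NUM_STEPS):
        action = policy(obs)            # actions must be inferred sequentially:
        obs, r = env_step(obs, action)  # the next obs depends on this action
        rewards.append(r)
    return obs, rewards  # after 20 steps, compute returns and update the model

obs, rewards = rollout(0)
print(len(rewards))  # → 20
```

The key point is that each observation only exists after the previous action has been taken, so within a worker there is nothing to batch at inference time.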
Hope that helps
Thanks. By 'batch', I mean one model running batch_size instances all at once. The code presented here does not do that; it runs one instance at a time (for 20 steps). That is not what I mean by 'batch processing', since true batch processing can improve performance a lot. Just wondering about this.
Again, great code.
Again, this is not GA3C, where a global model accumulates gradients and updates in a batch. This is an asynchronous model using the GPU, where the model is shared by all workers and updated in Hogwild! training style. What you are referring to is the GA3C version, which is different and much slower. The model I have provided trains around 10 times faster than that version. The other version accumulates large batches because it's hindered by lock requirements, which I get around by updating the shared model without locks.
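A minimal sketch of the lock-free, Hogwild!-style update pattern (illustrative only, not the repo's code; threads are used here for brevity, whereas the real setup uses worker processes with the model parameters placed in shared memory):

```python
# Toy sketch of Hogwild!-style training: all workers write updates into
# one shared parameter vector in place, with no locks at all.
import threading

params = [0.0] * 4  # stands in for the shared model's parameters

def worker(steps):
    for _ in range(steps):
        grad = [0.01] * len(params)  # pretend gradient from a rollout
        for i, g in enumerate(grad):
            params[i] += g  # apply update without acquiring any lock

def run_workers(n_workers=4, steps=100):
    threads = [threading.Thread(target=worker, args=(steps,))
               for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return params

final = run_workers()
print(final)  # each entry near 4 * 100 * 0.01 = 4.0; races may drop a few updates
```

The point of the pattern is that occasional lost or stale updates are tolerated in exchange for never blocking any worker, which is why it avoids the batching-behind-locks bottleneck described above.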
Got what you mean. I just had some comments, but it's not a big deal. The code here is awesome. Thanks.
Thanks for the implementation! Great code.
I read/ran your code and realized it is processing training examples with just batch_size=1 (instead of a large batch size; am I correct on this?). I am just wondering if this was designed on purpose for your G-A3C. With a larger batch size things run faster on a GPU, so why batch_size=1?
Is there anything we can do to run it with large batches?
Thank you.