Farama-Foundation / ViZDoom

Reinforcement Learning environments based on the 1993 game Doom :godmode:
https://vizdoom.farama.org/

GPU for Tensorflow_learning_Example #305

Closed · gian1312 closed this 6 years ago

gian1312 commented 6 years ago

Hi, I have a question regarding support for tensorflow-gpu. Generally, I have observed that the ViZDoom examples use only a small amount of resources. Is there a way to run them faster and make use of the GPU? When I run the TensorFlow examples (for example mnist_deep.py), I can see a significant speedup from the GPU. Running learning_tensorflow.py I can't see any improvement from using the GPU (only the memory gets filled), and both CPU and GPU utilization stay low. Training still takes quite a while, though. Did I miss some important points?

Miffyli commented 6 years ago

This is to be expected, especially with the example code.

The example code has a rather small network, a small input, and a large frame skip, i.e. there is not much to compute per game frame. Increasing the amount of computation (network size, input size, smaller frame skip) would make better use of your hardware, but when it comes to (deep) reinforcement learning you usually have to use some level of parallelism to utilize your hardware well, e.g. by running multiple training agents as in A3C. You can find examples of such code here: https://github.com/mihahauke/deep_rl_vizdoom
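
For illustration, here is a minimal sketch of the knobs that drive per-frame computation. The names `resolution` and `frame_repeat` mirror the example script, but the concrete values and the tf.keras network are illustrative assumptions, not the example's original TF1 code:

```python
# Sketch only: bigger input, smaller frame skip, and a deeper network than the
# bundled example, so there is more work per game frame for the GPU to do.
import tensorflow as tf

resolution = (120, 160)   # larger than the example's heavily downsampled input
frame_repeat = 4          # smaller skip than the example's default => more
                          # forward/backward passes per episode

def build_network(available_actions_count):
    # Deeper and wider than the example's two small conv layers (illustrative).
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 8, strides=4, activation="relu",
                               input_shape=(*resolution, 1)),
        tf.keras.layers.Conv2D(64, 4, strides=2, activation="relu"),
        tf.keras.layers.Conv2D(64, 3, strides=1, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(512, activation="relu"),
        tf.keras.layers.Dense(available_actions_count),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
    return model

if __name__ == "__main__":
    # Print the parameter count to see how much larger this is than the example.
    build_network(available_actions_count=8).summary()
```

Even with a network like this, a single agent will often leave the GPU idle between environment steps, which is why the parallel setups linked above help.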

One more point I'd like to bring up: are you running on Windows or Linux? ViZDoom runs several times faster on Linux in terms of FPS (see https://github.com/mwydmuch/ViZDoom/issues/195).
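
If you want to verify this on your own machines, a rough FPS measurement along these lines should do. This is a sketch; the config path is an assumption and depends on where the scenario files live in your checkout:

```python
# Rough environment-only FPS check (no learning), useful for comparing
# Windows vs. Linux throughput on the same machine class.
import itertools
import random
import time

import vizdoom as vzd

game = vzd.DoomGame()
game.load_config("../../scenarios/basic.cfg")  # adjust for your checkout
game.set_window_visible(False)                 # rendering a window slows things down
game.init()

# Enumerate all button combinations as one-hot-ish action lists.
n_buttons = game.get_available_buttons_size()
actions = [list(a) for a in itertools.product([0, 1], repeat=n_buttons)]

frames = 0
duration = 10.0
start = time.time()
game.new_episode()
while time.time() - start < duration:
    if game.is_episode_finished():
        game.new_episode()
    game.make_action(random.choice(actions))
    frames += 1
game.close()

print("ViZDoom FPS: %.0f" % (frames / duration))
```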

mihahauke commented 6 years ago

Just as @Miffyli wrote; I have nothing to add.

Thank you Anssi for answering our issues. You should get a medal or something.

gian1312 commented 6 years ago

Thanks a lot! This explains a lot, especially the discrepancy between my expensive Windows desktop and my "cheap" Linux laptop. Best regards