mjdietzx / SimGAN

Implementation of Apple's "Learning from Simulated and Unsupervised Images through Adversarial Training"
MIT License

GPU utilization #8

Open alex-mocanu opened 7 years ago

alex-mocanu commented 7 years ago

Hi,

Even though the net takes up almost all of the GPU memory, "Volatile GPU-Util" stays at 0%, so I conclude that the net isn't actually running on the GPU (the very slow training also suggests this). I am running SimGAN with TensorFlow 1.3 and Keras 2.0.6 (I changed Convolution2D to Conv2D, though the original behaved the same way). Could you tell me the configuration you used for testing (TensorFlow and Keras versions)?
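As a sanity check (a minimal sketch, assuming TensorFlow 1.x), this is one way to confirm that TensorFlow can see the GPU at all:

```python
# Minimal check that TensorFlow 1.x detects a GPU.
# If no GPU device is listed, the issue is the TensorFlow install
# (e.g. the CPU-only package), not this repo's code.
from tensorflow.python.client import device_lib

for device in device_lib.list_local_devices():
    print(device.name, device.device_type)  # expect a '/gpu:0' / 'GPU' entry
```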

Thank you!

mjdietzx commented 7 years ago

From the Keras docs: "If you are running on the TensorFlow or CNTK backends, your code will automatically run on GPU if any available GPU is detected." So it sounds like it could be a problem with your setup. I've been using PyTorch lately, so I'm not up to date with Keras/TensorFlow anymore, and maybe it is a problem with the code. Have you successfully run other Keras/TensorFlow models on your setup?
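One way to confirm where the TensorFlow backend is actually placing ops (a sketch, assuming TF 1.x and Keras 2.x; I haven't run this against this repo) is to enable device placement logging before building the model:

```python
# Make TensorFlow log the device each op is assigned to; Keras will
# reuse this session for everything it builds afterwards.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto(log_device_placement=True)
K.set_session(tf.Session(config=config))
# ...build and train as usual; the console then shows whether ops
# land on /gpu:0 or silently fall back to /cpu:0.
```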

alex-mocanu commented 7 years ago

After I saw that SimGAN was not behaving well, I ran this DCGAN implementation to check whether there was a problem with my configuration: https://github.com/jacobgil/keras-dcgan. The DCGAN ran fine, reaching about 75% "Volatile GPU-Util".
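If you want a check that is more direct than a full DCGAN run, one option (a sketch, assuming TF 1.x; the sizes are arbitrary) is to time a large matmul pinned to each device:

```python
import time
import tensorflow as tf

# Time a large matmul on CPU vs GPU. If the GPU build is healthy,
# the /gpu:0 run should be far faster; if the device is unavailable,
# pinning raises an error (soft placement is off by default here).
for device_name in ['/cpu:0', '/gpu:0']:
    with tf.Graph().as_default():
        with tf.device(device_name):
            a = tf.random_normal((4000, 4000))
            product = tf.matmul(a, a)
        with tf.Session() as sess:
            sess.run(product)                 # warm-up (CUDA init, etc.)
            start = time.time()
            sess.run(product)
            print(device_name, time.time() - start, 'seconds')
```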

GabrielLin commented 7 years ago

I'm seeing the same behavior with TensorFlow 1.3.0 and Keras 2.0.5. @alex-mocanu, @mjdietzx, could you please tell me how long you trained the model? Thanks.