Open BMillikan73 opened 5 years ago
Seems like you ran out of GPU memory.
But the same model runs on the same GPU with the TensorFlow backend.
On Dec 13, 2018, at 10:50 AM, delzac notifications@github.com wrote:
Seems like you run out of GPU memory.
The specific implementation differs between frameworks, so it's not unexpected that VRAM usage also differs. Also, since you are working on an FCN, it tends to be very memory intensive.
I would suggest that you decrease your minibatch size, number of filters, or number of layers so that your GPU memory doesn't run out.
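Minibatch size is usually the first knob to turn because activation memory scales roughly linearly with it. A back-of-the-envelope sketch (the layer sizes below are hypothetical, not taken from the model in question, and the real footprint also includes gradients and framework workspace buffers):

```python
def activation_bytes(batch, height, width, channels, bytes_per_elem=4):
    """Rough memory for one layer's float32 activations, in bytes."""
    return batch * height * width * channels * bytes_per_elem

# FCN-style layers keep large spatial maps, e.g. a 512x512 feature map
# with 64 channels (hypothetical sizes for illustration):
full = activation_bytes(batch=8, height=512, width=512, channels=64)
half = activation_bytes(batch=4, height=512, width=512, channels=64)

print(full / 2**20)  # MiB at batch 8
print(half / 2**20)  # MiB at batch 4: halving the batch halves this term
```

The same linear scaling applies per-layer to the filter count, which is why trimming filters or layers is the other common fix.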
I have a Keras model of the FCN-8s network and can train it fine with the TensorFlow backend. However, when I run the same script with the CNTK backend, I get:
RuntimeError: CUDA failure 2: out of memory ; GPU=0 ; hostname=MSI ; expr=cudaMalloc((void*) &deviceBufferPtr, sizeof(AllocatedElemType) AsMultipleOf(numElements, 2))
Why would this be?
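For reference, multi-backend Keras picks its backend from the `KERAS_BACKEND` environment variable (or the `backend` field in `~/.keras/keras.json`), read once at import time. A minimal sketch of switching between the two backends being compared:

```python
import os

# Keras reads this at import time, so set it before the first `import keras`.
os.environ["KERAS_BACKEND"] = "cntk"   # or "tensorflow"

# import keras  # would now initialize with the CNTK backend
print(os.environ["KERAS_BACKEND"])
```

This makes it easy to run the identical script under both backends and compare their memory behaviour.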