The warning on the front page says that we no longer support this project; we are developing another one instead.
There can be many reasons for high GPU memory usage. I don't know GroundHog, but if it puts the dataset in GPU memory, that could be one cause.
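Not GroundHog-specific, but here is a minimal plain-Theano sketch of the difference, with made-up array sizes just for illustration: keeping the whole dataset in a shared variable copies it all to the device, while passing minibatches as function inputs keeps only one batch on the GPU at a time.

```python
import numpy as np
import theano
import theano.tensor as T

# Toy dataset; the sizes here are arbitrary, purely for illustration.
data = np.random.rand(100000, 500).astype(theano.config.floatX)

# Pattern 1: the whole dataset lives in a shared variable.  With device=gpu,
# all of `data` is copied into GPU memory up front, which alone can exhaust it.
data_gpu = theano.shared(data, name='data_gpu')

# Pattern 2: the dataset stays in host (CPU) memory and only one minibatch is
# transferred per call, so the GPU only ever holds a single batch at a time.
x = T.matrix('x')
f = theano.function([x], (x ** 2).sum())

batch_size = 64
for start in range(0, data.shape[0], batch_size):
    f(data[start:start + batch_size])
```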
Your GPU does not have enough memory. Try a better GPU with more memory.
The GPU he uses is already a 12 GB GPU. Hard to find better, though NVIDIA recently released a 24 GB GPU.
See this page for some of the trade-offs between speed and memory in Theano:
http://deeplearning.net/software/theano/faq.html?highlight=memory#theano-memory-speed-trade-off
Otherwise, use a smaller batch size, smaller layers, or fewer layers.
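For the GroundHog NMT experiments, that usually means shrinking the state dictionary used to configure the model. The sketch below is only a guess at what that looks like: the import path, the `prototype_search_state()` helper, the script name, and the keys `'bs'`, `'dim'`, and `'seqlen'` are assumptions based on typical GroundHog state files, so check them against your own `state.py`.

```python
# Hypothetical sketch of shrinking a GroundHog NMT configuration; the module
# path, prototype function, and key names are assumptions, so adapt them to
# whatever your state file actually defines.
from experiments.nmt.state import prototype_search_state  # assumed location

state = prototype_search_state()

state['bs'] = 16      # smaller minibatch -> fewer activations held on the GPU
state['dim'] = 500    # smaller hidden layers -> smaller weight matrices
state['seqlen'] = 30  # shorter training sequences also reduce activation memory

# On the Theano side, leaving garbage collection enabled (the default) trades
# some speed for lower peak memory, e.g. (script name assumed):
#   THEANO_FLAGS='allow_gc=True' python train.py
```

The speed/memory trade-off page linked above lists the remaining Theano flags worth checking.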
As this repo is deprecated and not supported, I'm closing this issue.
Thank you! I just reduced the batch size, problem solved. :)
Hello, I am trying to train an NMT model but I get the following error: "Error allocating 240008000 bytes of device memory (out of memory). Driver report 95178752 bytes free and 12079136768 bytes total"
I have already changed the batch size and minibatch size to 10 and 5 respectively, as suggested in a post on the same issue. Kindly suggest what may be causing this much memory utilization.
Thank you.