cantonioupao closed this issue 5 years ago
Hi! I ran into the same issue: even when I train with a batch size of 5, it reports "RuntimeError: CUDA error, out of memory". I tried it with a TITAN Xp 12GB. Could you give me some ideas? Thanks
A higher batch size requires more GPU memory. This issue is closed.
The only extra piece of code I added is DataParallel, to use both of my GPUs. So I am training with a larger batch size of 50 on my 2 GPUs (GeForce GTX TITAN X, 12GB RAM each). It runs the first 10-15 epochs but then it raises a MemoryError. When I tried to train with a batch size of 200 it gave me the error from the very beginning. I know that reducing the batch size would avoid the error, but can someone suggest an alternative solution?
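A sketch of what this setup might look like (the model and sizes below are hypothetical, not from this thread). Note that an OOM appearing only after 10-15 epochs is often caused by accumulating the loss tensor itself across iterations, which keeps every iteration's computation graph alive; calling `.item()` avoids that:

```python
import torch
import torch.nn as nn

# Hypothetical tiny model standing in for the actual network
model = nn.Linear(8, 2)
# DataParallel splits each batch across all visible GPUs
# (on a CPU-only machine it simply runs the wrapped module)
model = nn.DataParallel(model)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

running_loss = 0.0
for step in range(3):
    inputs = torch.randn(50, 8)            # batch size 50, as in the post
    targets = torch.randint(0, 2, (50,))
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    # .item() extracts a Python float; writing `running_loss += loss`
    # instead would retain each step's graph and slowly exhaust memory
    running_loss += loss.item()
print(running_loss)
```

If the gradual memory growth is not from logging, another common workaround is gradient accumulation: run smaller micro-batches and call `optimizer.step()` only every N iterations, which gives the effective batch size of 50 without holding the full batch in memory at once.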