descriptinc / melgan-neurips

GAN-based Mel-Spectrogram Inversion Network for Text-to-Speech Synthesis
MIT License

On Windows, even when I pass nothing ("") to set_env.sh, I get GPU out of memory! #6

Open khorshidisamira opened 5 years ago

khorshidisamira commented 5 years ago

On Windows, I run set_env.sh to use the CPU, but when I run the training script I get the following error:

Traceback (most recent call last):
  File "scripts/train.py", line 234, in <module>
    main()
  File "scripts/train.py", line 156, in main
    loss_D.backward()
  File "C:\Users\Samira\Anaconda3\lib\site-packages\torch\tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "C:\Users\Samira\Anaconda3\lib\site-packages\torch\autograd\__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 2.00 GiB total capacity; 1.23 GiB already allocated; 19.62 MiB free; 31.60 MiB cached)

How can I train on the CPU instead of the GPU?

ritheshkumar95 commented 5 years ago

So the issue is that the code is written in a way that doesn't support training on the CPU (my bad). You could convert all the .cuda() calls in the code to .to(device), where device is an argument. I could do this myself, but I don't know when I'd find the time to do it.
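For anyone who wants to make the change themselves, here is a minimal sketch of that pattern. It is not the repo's actual train.py: the --device flag, the nn.Linear placeholder model, and the variable names are illustrative assumptions, not code from this repository.

```python
import argparse

import torch
import torch.nn as nn


def main():
    parser = argparse.ArgumentParser()
    # Hypothetical flag; scripts/train.py does not currently expose this.
    parser.add_argument("--device",
                        default="cuda" if torch.cuda.is_available() else "cpu")
    args = parser.parse_args()
    device = torch.device(args.device)

    # Instead of: model = Model(...).cuda()
    model = nn.Linear(80, 1).to(device)   # placeholder for the real generator/discriminator
    opt = torch.optim.Adam(model.parameters())

    # Inputs have to be moved the same way, e.g. mel = mel.to(device)
    mel = torch.randn(4, 80).to(device)
    loss = model(mel).mean()
    loss.backward()
    opt.step()


if __name__ == "__main__":
    main()
```

With a change along these lines, something like `python scripts/train.py --device cpu` would run entirely on the CPU, while `--device cuda` would keep the current GPU behaviour.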