Closed smaybius closed 3 years ago
can you share your config?
"model": "ljspeech_test"
is the problem.
It has to be a valid TTS model name: 'tacotron', 'tacotron2', 'glow_tts', or 'speedy_speech'.
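For example, assuming you meant to train Tacotron2, the config entry would look like this (the surrounding fields are omitted; only the `model` key changes):

```json
"model": "tacotron2"
```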
That fixed it, but now I get: `RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 2.24 GiB already allocated; 11.70 MiB free; 2.35 GiB reserved in total by PyTorch)`. This is on a laptop GTX 1650, and I don't want to invest in an eGPU just for this.
You should reduce the batch_size then. Try smaller values.
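For instance, in the training config you could lower the batch size until the model fits in the 4 GB of VRAM; a sketch assuming the usual `batch_size` / `eval_batch_size` fields (exact values depend on your model and audio settings, so start small and increase if memory allows):

```json
"batch_size": 16,
"eval_batch_size": 8
```

Note that very small batch sizes can make training less stable, so this trades memory for convergence speed.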
I'm trying to train a model with my own dataset, and I got this error. The same thing happened when I used the default LJSpeech dataset: https://pastebin.com/WugD8rZt