Open: ArmoredExplorer opened this issue 4 months ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.
I'm wondering what is wrong here. The failure seems to happen inside the torch package.
Hey @ArmoredExplorer! I ran into exactly the same issue, also on a machine with 16 GB of RAM. The workaround I found was to split the initial text into very small text files (max 100 characters...) and feed them to the model sequentially, as sketched below. It is very slow, but it is the best I could do. You could also process them in parallel batches, but that needs extra care.
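A rough sketch of that workaround, assuming the Coqui TTS Python API; the model name, input path, and output naming are placeholders, not the exact ones I used:

```python
from TTS.api import TTS

# Hypothetical English model; substitute whichever model you actually use.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", gpu=False)

def chunk_text(text, max_chars=100):
    """Split text into pieces of at most max_chars, breaking on whitespace."""
    words, chunk, chunks, length = text.split(), [], [], 0
    for word in words:
        if length + len(word) + 1 > max_chars and chunk:
            chunks.append(" ".join(chunk))
            chunk, length = [], 0
        chunk.append(word)
        length += len(word) + 1
    if chunk:
        chunks.append(" ".join(chunk))
    return chunks

with open("input.txt", encoding="utf-8") as f:
    text = f.read()

# Feed tiny chunks one at a time; each individual .wav write stays small
# enough to fit in memory. The chunk files can be concatenated afterwards.
for i, chunk in enumerate(chunk_text(text)):
    tts.tts_to_file(text=chunk, file_path=f"out_{i:05d}.wav")
```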
Describe the bug
When running text-to-speech with an English model, the process runs out of memory when tts tries to write the .wav file. I'm running on CPU only; my machine has ~14 GB of available RAM.
I ran the code on around 20 pages of text. Everything worked up to tts.tts_to_file, which then threw `RuntimeError: bad allocation`. During inference the model successfully swapped chunks in and out of memory, but it looks like it ran out of memory when writing the file.
It works fine on a few paragraphs.
To Reproduce
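A minimal sketch of the failing call, assuming the Coqui TTS Python API; the model name and input file are illustrative placeholders, not the exact ones from my run:

```python
from TTS.api import TTS

# Assumed English model name; any English model on CPU should do.
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", gpu=False)

# Roughly 20 pages of text loaded from a file (placeholder path).
with open("book_excerpt.txt", encoding="utf-8") as f:
    text = f.read()

# Inference completes, but writing the .wav raises
# "RuntimeError: bad allocation" with ~14 GB of free RAM.
tts.tts_to_file(text=text, file_path="output.wav")
```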
Expected behavior
Writing the .wav file successfully
Logs
Environment
Additional context
This happens with 16 GB of RAM, so it might not reproduce if you test on a machine with more memory. Limiting the memory available to a VM should make it reproducible.
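As an alternative to a VM, one untested way to reproduce this on a larger machine is to cap the process's address space with Python's resource module (Linux only); the ~14 GB figure is from the report above:

```python
import resource

# Cap the virtual address space at ~14 GB so the allocation failure
# can reproduce on machines with more physical RAM (Linux only).
limit_bytes = 14 * 1024 ** 3
resource.setrlimit(resource.RLIMIT_AS, (limit_bytes, limit_bytes))

# ...then run the reproduction code above; tts_to_file should hit the
# same out-of-memory error when writing the .wav file.
```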