Open SarahPeterson2854 opened 1 year ago
I'm quite confident that if there is a memory leak, it is not occurring in Tortoise. I have personally run it on servers for months at a time with no crashes and no memory growth. I'd take a second look at whether or not you are storing any intermediate variables, and potentially dig into your dependencies.
Try using conda on Colab. I had it working nicely a couple of weeks back; hopefully it didn't break since then.
Make sure you edit requirements.txt as noted.
The Colab session will "crash" after installing conda. This is just it restarting; continue on with the steps afterwards.
https://gist.github.com/n8bot/8e98ff216c1363a1222d990963458108
I'm happy to review a PR that makes these changes.
Running this (with or without adjusting requirements.txt), then rerunning after the crash, throws:
ModuleNotFoundError                       Traceback (most recent call last)
3 frames
/content/tortoise-tts/tortoise/models/xtransformers.py in <module>
ModuleNotFoundError: No module named 'einops'
If that colab doesn't work anymore, I have no idea. I abandoned that process and now just do local inference.
Here are my latest steps which rely purely on (mini)conda:
https://gist.github.com/n8bot/96f5a7c5a9493909113280cfa9732506
I was facing the same problem. After execution, the utilized memory still persists (~11 GB); deleting the generated sample, the tts object, or any other variable doesn't clear it.
My workaround was using Python's multiprocessing module to execute Tortoise: all utilized memory is fully cleared out, ready for the next generation.
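A minimal sketch of that multiprocessing workaround (the generate() function below is a hypothetical stand-in for the real TextToSpeech / tts_with_preset call, which the original comment does not show):

```python
import multiprocessing as mp

def generate(text, out_path):
    # Stand-in for the real Tortoise call: in the actual workaround this
    # would construct TextToSpeech, run tts_with_preset(...), and save
    # the resulting audio to out_path.
    with open(out_path, "w") as f:
        f.write(f"audio for: {text}")

def generate_in_subprocess(text, out_path):
    # Each generation runs in a child process; when it exits, the OS
    # reclaims everything it allocated (model weights, activations,
    # caches), so the parent's memory usage stays flat.
    # Note: with CUDA you would normally use the "spawn" start method
    # and guard the entry point with `if __name__ == "__main__":`.
    ctx = mp.get_context("fork")
    p = ctx.Process(target=generate, args=(text, out_path))
    p.start()
    p.join()
    return p.exitcode
```

The child writes its result to disk rather than returning it, so nothing from the generation survives in the parent process.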
I think I'm running into this issue. Can I ask for details or snippets of how you used multiprocessing? Thanks in advance.
I can run and generate short (~15-character) audio with some patches to the broken imports (!pip3 install --force einops==0.4.1, etc.).
After one generation, RAM usage on Colab is at 11.7 GB.
Deleting the generated sample, the tts object, or any other variable doesn't reduce it.
Running another text generation may or may not work; it crashes consistently at 12.7 GB (the max in Colab) after 4 generations of short snippets on ultra_fast, with everything else default from the instructions.
Garbage collecting and deleting variables does not help during subsequent iterations.
Five 10-second samples are loaded for the voice.
Running: gen = tts.tts_with_preset(text, voice_samples=voice_samples, conditioning_latents=conditioning_latents, preset=preset)
3 out of 4 times in a row, with a short 10-character string for text, the ultra_fast preset, and five 10-second WAV voice samples/latents, each call adds 200-800 MB of RAM. The gen is produced and the cell completes for the first 3 runs, but the RAM eventually explodes past the Colab limit.
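The multiprocessing workaround mentioned earlier in the thread targets exactly this pattern: memory allocated in a child process is returned to the OS when the process exits, whereas allocations made in the main Colab process can stay resident. A minimal, Tortoise-free demonstration of that effect (Linux-only, since it reads /proc for the resident set size):

```python
import multiprocessing as mp

def rss_kb():
    # Resident set size of the current process in kB (Linux: parse /proc).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

def fake_generation():
    # Stand-in for one Tortoise generation: hold ~200 MB resident.
    blob = b"\x01" * (200 * 1024 * 1024)
    return len(blob)

def parent_rss_growth_kb():
    # Run the "generation" in a forked child; when it exits, the OS
    # reclaims its memory, so the parent's RSS barely changes.
    before = rss_kb()
    p = mp.get_context("fork").Process(target=fake_generation)
    p.start()
    p.join()
    return rss_kb() - before
```

Calling fake_generation() directly in the main process would grow its RSS by roughly 200 MB; routed through the child, the parent's growth stays negligible.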