Closed: Likelihoood closed this issue 1 year ago
Hi,
There is a memory leak when running TTS.
How does the leak manifest itself? Does the RAM consumption grow up to a certain limit, or does it continue to grow?
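One way to answer this empirically is to record the process's peak RSS across repeated inference calls and see whether it plateaus or keeps climbing. A minimal sketch (the `synthesize` stand-in is hypothetical; substitute the actual `model.apply_tts(...)` call):

```python
import gc
import resource

def rss_kb() -> int:
    # Peak resident set size of this process (kB on Linux).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

def measure(fn, iterations: int = 50) -> list:
    """Run `fn` repeatedly and record peak RSS after each call."""
    samples = []
    for _ in range(iterations):
        fn()
        gc.collect()            # rule out delayed Python garbage collection
        samples.append(rss_kb())
    return samples

# Hypothetical stand-in for model.apply_tts(...):
def synthesize():
    _ = [0.0] * 100_000         # placeholder allocation

samples = measure(synthesize, iterations=20)
print(samples[0], samples[-1])
```

If the samples level off after a warm-up phase, the growth is likely allocator caching; if the peak keeps increasing with every call, that points to a genuine leak.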
I'm using torch 2.0.1.
Does using older PyTorch versions make any difference, e.g. 1.10 or 1.12?
You installed via pip; does using the repo with `torch.hub` directly make any difference?
Thanks for your response.
I am going to try a lower PyTorch version to see whether it makes a difference.
Using 1.10 does not fix the memory leak.
Loading via `torch.hub` still does not fix the memory leak.
Yes. I deploy the TTS model on Kubernetes; the pod's memory consumption grows quickly and the pod restarts due to OOM.
Does the leak happen without Kubernetes? How much RAM is allocated per pod?
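For what it's worth, steady RSS growth in long-running CPU-inference pods is sometimes not a leak at all but glibc malloc arena fragmentation under threaded workloads. Capping the arena count is a common mitigation worth trying before digging deeper (a general hint, not a confirmed fix for this issue; the service entrypoint below is hypothetical):

```shell
# Cap glibc malloc arenas before starting the service; fewer arenas
# reduce fragmentation in threaded workloads at some throughput cost.
export MALLOC_ARENA_MAX=2
# Then launch the TTS process in the same shell, e.g.:
#   python serve_tts.py    (hypothetical entrypoint)
echo "MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX"
```

In Kubernetes this would typically be set via an `env` entry in the pod spec rather than a shell export.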
🐛 Bug
Memory leak when running TTS
To Reproduce
Inference on CPU:

```python
imp = package.PackageImporter(model_path)
model = imp.load_pickle("tts_models", "model")
model.model.eval()
model.apply_tts(text=text,
                speaker=speaker,
                sample_rate=sample_rate,
                put_accent=False,
                put_yo=False)
```
Expected behavior
No memory leak.
Environment
Please copy and paste the output from the environment collection script, or fill out the checklist below manually.

- PyTorch version: 2.0.1
- How you installed PyTorch (conda, pip, source): pip

Additional context