Closed nerkdesign closed 1 month ago
Hi @nerkdesign
By latest, do you mean the BETA? And by model you mean TTS Engine?
Let me know
Thanks
Hello,
Sorry, I was talking about the latest stable version (not the BETA). About the model, that was a mistake; I was talking about the prompt sizing (long text fills up the memory quickly).
Thanks for your feedback and your help
Ok, I've not come across this issue at all, and the code for version 1.9 has been stable for quite a long time now. What you are describing (memory filling up) suggests multiple copies of the XTTS model being loaded, which is all I can think of.
I can only try and replicate the problem here and see what happens.
Would you be able to get me a diagnostics file for your computer? To do this, open a command prompt in the alltalk folder and run start_environment.
Then, once the Python environment has started, run python diagnostics.py
and select requirements_standalone.txt. If you could then upload the diagnostics.log
file here, that will give me an understanding of your setup and Python environment.
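Put together, the steps above look like this on Windows (the folder name and the `.bat` extension are assumptions; adjust to match your install):

```shell
cd alltalk_tts            # your AllTalk folder (name may differ)
start_environment.bat     # activates the bundled Python environment
python diagnostics.py     # choose requirements_standalone.txt when prompted
```

This writes a diagnostics.log in the same folder, which is the file to upload.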
After that, can you describe how you are generating TTS. Is it through the web page interface at http://127.0.0.1:7851, or through the TTS Generator, or through curl/API requests, or Text-generation-webui, etc.?
Have you any idea how long the text is that you are generating?
Thanks
Hi @nerkdesign
Not heard back from you. If you need more help, let me know. Thanks
Using the latest version, standalone install.
Hello,
When I do some TTS conversion, it fills my graphics card memory (12 GB VRAM) and makes it crash after some use (depending on the model used, it can be after only 3-4 generations). I have to restart the process each time. Is there a way to free the memory after a generation without restarting the software?
Many thanks
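For context, the general technique for reclaiming VRAM between generations in a PyTorch-based app (AllTalk loads XTTS via PyTorch) is to drop the Python references to the model, run the garbage collector, and then release the CUDA caching allocator. This is a minimal sketch of that pattern, not AllTalk's actual API; the function name is illustrative:

```python
# Sketch: reclaiming GPU memory between generations in a PyTorch app.
# Assumes the model is an ordinary PyTorch object; "free_model_memory"
# is a hypothetical helper, not part of AllTalk.
import gc


def free_model_memory(model=None):
    """Drop references to a loaded model and release cached CUDA memory."""
    if model is not None:
        del model  # drop the Python reference so tensors become collectable
    gc.collect()  # reclaim unreferenced objects (including GPU tensors)
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()  # return cached VRAM to the driver
    except ImportError:
        pass  # torch not installed; nothing to free


free_model_memory()
```

Note that `torch.cuda.empty_cache()` only releases memory the caching allocator is holding onto; VRAM still referenced by a live model stays allocated, which is why dropping the reference first matters.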