Closed danielw97 closed 9 months ago
I pushed up a small change to a branch; can you try this:

git fetch origin
git checkout 83-xtts-vram

then try running directly (python epub2tts.py book.epub etc.).
Let me know if that fixes it - I should have taken that into consideration the first time around; I was being pretty sloppy with that.
Thanks for your speedy reply, that seems to have fixed it on my end.
Excellent! Thanks for confirming so quickly, I'll merge this now.
Hi there,

After merging pull request #80, I noticed that I am now running out of VRAM on my modest 4 GB graphics card. For the moment I've worked around the problem on my end by commenting out:

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(self.device)

I'm just getting started in the AI space, but I assume the model is effectively being loaded into VRAM twice and not unloaded the first time. I'm not sure if there is an easy way to solve this; I just thought I would let you know. Thanks for all of your work on this.
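The double-load hypothesis above suggests one common fix: cache the model object so repeated construction reuses a single instance instead of allocating VRAM again. This is only a minimal sketch of that pattern - the `get_model` helper and the `fake_loader` stand-in are hypothetical, not part of epub2tts or the Coqui TTS API:

```python
# Sketch: module-level cache so a model is constructed only once
# per name, no matter how many times get_model() is called.
_model_cache = {}

def get_model(name, loader):
    """Return a cached model instance, creating it on first use."""
    if name not in _model_cache:
        _model_cache[name] = loader(name)  # expensive load happens once
    return _model_cache[name]

# Hypothetical stand-in for a real loader such as
# TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
load_calls = []
def fake_loader(name):
    load_calls.append(name)
    return object()

m1 = get_model("xtts_v2", fake_loader)
m2 = get_model("xtts_v2", fake_loader)
assert m1 is m2          # same instance reused
assert len(load_calls) == 1  # loader ran only once
```

In a real PyTorch setup, explicitly releasing the first copy (del model followed by torch.cuda.empty_cache()) would also return the VRAM, but caching a single instance avoids the duplicate allocation in the first place.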