aedocw / epub2tts

Turn an epub or text file into an audiobook
Apache License 2.0

after recent merge, unable to use gpu for xtts #116

Closed danielw97 closed 7 months ago

danielw97 commented 7 months ago

Hi again, I'm currently using a 4 GB graphics card, and I've found that after the most recent merge I'm unable to use XTTS v2 with my GPU as before. This may well be by design, particularly for folks who don't have a GPU capable of running the model, although I see on the Coqui Discord that the model can run on cards with 4 GB, and it has performed fine on mine. I see that the check is currently torch.cuda.get_device_properties(0).total_memory > 7500000000, and I'm wondering if this could please be lowered to 4000000000 or similar, since if my math is right the current minimum is roughly 7.5 GB. Thanks, and of course let me know if my thinking is wrong here.
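For reference, a minimal sketch of the kind of threshold check being discussed, with the lowered limit the reporter is asking for; the constant name and surrounding structure are hypothetical and not epub2tts's actual code, only the total_memory comparison mirrors the check in question:

```python
import torch

# Hypothetical illustration of the VRAM gate discussed above.
MIN_XTTS_VRAM_BYTES = 4_000_000_000  # ~4 GB, lowered from the current ~7.5 GB

use_gpu = (
    torch.cuda.is_available()
    and torch.cuda.get_device_properties(0).total_memory > MIN_XTTS_VRAM_BYTES
)
device = "cuda" if use_gpu else "cpu"
```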

aedocw commented 7 months ago

Your thinking seems reasonable to me; I wasn't sure about that limit from the PR @wonka929 did yesterday. I'll let them weigh in first, but I'm not sure there's a good reason to block GPU usage based on a memory limit. For instance, the GPU might use shared memory that would not show up in total_memory (I believe), so you could still use the GPU even with less VRAM available; it would just be slower (but still much faster than CPU).
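A hedged sketch of the alternative being suggested here, warning rather than blocking when reported VRAM is low; the names and the warning text are illustrative assumptions, not the project's implementation:

```python
import torch

# Illustrative only: prefer the GPU whenever CUDA is available, and just
# warn when the reported VRAM is below a recommended amount.
RECOMMENDED_VRAM_BYTES = 7_500_000_000

if torch.cuda.is_available():
    device = "cuda"
    total = torch.cuda.get_device_properties(0).total_memory
    if total < RECOMMENDED_VRAM_BYTES:
        print(f"Warning: only {total / 1e9:.1f} GB of VRAM reported; "
              "XTTS may run slowly or rely on shared memory.")
else:
    device = "cpu"
```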

wonka929 commented 7 months ago

@danielw97

Yes, it can be lowered to something like 3.5 GB if needed. I have a PC with just 2 GB and it's not capable of running it. I chose the limit based on the RAM used while running in CPU mode (roughly 4 GB), so I selected the next graphics card size up (8 GB). As far as I'm concerned it can be changed without a problem; I'm just not sure what the lower limit is.

If you can run the script with CUDA active and tell us what your GPU VRAM usage is, we can tune the value down from 7.5 GB to a reasonable amount.
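A minimal sketch of one way to capture that number, assuming the synthesis runs through PyTorch's CUDA allocator on GPU 0 (this snippet is an illustration, not code from the project):

```python
import torch

# After a synthesis run finishes, report the peak CUDA memory PyTorch
# allocated on GPU 0.
peak_gb = torch.cuda.max_memory_allocated(0) / 1e9
print(f"Peak CUDA memory allocated: {peak_gb:.2f} GB")
```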

danielw97 commented 7 months ago

At least for me, in most cases the VRAM usage hovers around 3.2 GB, although I'm not sure whether anything is being offloaded to RAM. Hope this is somewhat useful.

aedocw commented 7 months ago

This was resolved with https://github.com/aedocw/epub2tts/pull/118