erew123 / alltalk_tts

AllTalk is based on the Coqui TTS engine, similar to the Coqui_tts extension for Text generation webUI, but it supports a variety of advanced features, such as a settings page, low VRAM support, DeepSpeed, a narrator, model finetuning, custom models, and wav file maintenance. It can also be used with 3rd Party software via JSON calls.
GNU Affero General Public License v3.0

Add torch in PATH for docker environment #161

Closed Fgabz closed 4 months ago

Fgabz commented 4 months ago

Just a quick note for people like me using a docker environment (typically when you use a service such as runpod/jarvislabs).

While trying to finetune the model, I encountered this error:

Could not load library libcudnn_ops_infer.so.8

To fix it, you need to add the torch lib directory to LD_LIBRARY_PATH, like this:

In my case: export LD_LIBRARY_PATH=/home/text-generation-webui/installer_files/env/lib/python3.11/site-packages/torch/lib:$LD_LIBRARY_PATH

and the finetuning in step 1 should work!
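The exact path depends on where your Python environment lives, so a small sketch like the one below can resolve it for you instead of hard-coding it. It assumes the python on your PATH is the one from the environment you finetune with, that torch is importable there, and that you use bash (the ~/.bashrc line is optional):

```bash
# Resolve the lib directory of the installed torch package (the directory
# that contained the missing cuDNN library in the report above).
TORCH_LIB="$(python -c 'import os, torch; print(os.path.join(os.path.dirname(torch.__file__), "lib"))')"

# Make those libraries visible to the dynamic loader for this shell.
export LD_LIBRARY_PATH="${TORCH_LIB}:${LD_LIBRARY_PATH}"

# Optional: persist the setting for new shells in the container.
echo "export LD_LIBRARY_PATH=${TORCH_LIB}:\$LD_LIBRARY_PATH" >> ~/.bashrc
```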

ref => https://github.com/SYSTRAN/faster-whisper/issues/516

erew123 commented 4 months ago

Hi @Fgabz

Thanks for the info! I see you are running this with text-generation-webui, so not Standalone. I assume that the instruction listed here:

https://github.com/erew123/alltalk_tts?tab=readme-ov-file#-starting-fine-tuning

(screenshot of the fine-tuning instructions from the README)

Didn't work in a text-gen environment? Or was this in addition to it?

Thanks

Fgabz commented 4 months ago

Yes, I'm using it with text-generation-webui, and it's IN ADDITION to what's already in the documentation you've made.

erew123 commented 4 months ago

@Fgabz Great! Thanks for letting me know. I'll look to add this to the instructions at some point.

I'll close the ticket for now.

Thanks