Closed: blob42 closed this issue 9 months ago
I think the reason is that I'm using the nvidia-based build: common-env/transformers-nvidia.yml does not list TTS as a requirement.
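For illustration, the fix amounts to listing the Coqui package in that conda environment spec. A minimal sketch (the surrounding entries, channel, and Python version are assumptions, not the actual file contents):

```yaml
# common-env/transformers-nvidia.yml (hypothetical excerpt)
name: transformers
channels:
  - defaults
dependencies:
  - python=3.11
  - pip
  - pip:
      - torch
      - transformers
      - TTS   # the Coqui TTS package; the line the nvidia variant is missing
```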
Try building the backends first, before running Docker:

```sh
make clean
BUILD_GRPC_FOR_BACKEND_LLAMA=ON make GO_TAGS=stablediffusion,tts build
```

or, to cover all the backends:

```sh
make GO_TAGS=stablediffusion,tts,tinydream build
```

It would also be useful if you could give more details about your workflow: your .env, Go and Python versions, docker-compose.yaml, Dockerfile, and the ssh command you use. Once you have rebuilt the image with the right GO_TAGS, simply run `docker-compose up --build` or `docker-compose up -d`. Note that you need to set REBUILD=true and GO_TAGS=tts in your .env file in order to use TTS.
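For reference, a minimal .env along those lines might look like this (a sketch; any other variables you already have stay as they are):

```sh
# .env (sketch): force a source rebuild with the tts build tag
REBUILD=true
GO_TAGS=tts
```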
I can confirm the gRPC backend works after adding the TTS dependency to the nvidia requirements file.
LocalAI version: v2.8.2
Environment, CPU architecture, OS, and Version: Docker 24.0.7 on Arch Linux (Linux 6.6.6-zen1-1-zen)
Describe the bug: When I try to run Coqui TTS using the example from the documentation, I get a gRPC connection error. I am using Docker, so manually running the `local-ai tts ...` command from inside the container (with DEBUG=true) shows a more detailed error.
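For context, the documentation's Coqui example is a plain HTTP request against the /tts endpoint; a rough sketch (the port and model name are illustrative, not taken from my setup):

```sh
curl http://localhost:8080/tts \
  -H "Content-Type: application/json" \
  -d '{
        "backend": "coqui",
        "model": "tts_models/en/ljspeech/glow-tts",
        "input": "Hello, world."
      }'
```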
To Reproduce: Pull or build the v2.8.2 image. For what it's worth, inference with CUDA works.
Expected behavior: A working Coqui TTS endpoint.