mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more models architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed, P2P inference
https://localai.io
MIT License

Cannot run coqui tts - Error: grpc process not found (image and local docker build) #1727

Closed blob42 closed 9 months ago

blob42 commented 9 months ago

LocalAI version:

Environment, CPU architecture, OS, and Version: docker 24.0.7 on ArchLinux - Linux 6.6.6-zen1-1-zen

Describe the bug When I try to run coqui TTS using the example from the documentation, I get a gRPC connection error. Since I am using docker, manually running the local-ai tts ... command from inside the container shows the following detailed error:

11:56PM INF Loading model 'tts_models/en/ljspeech/glow-tts' with backend coqui
11:56PM DBG Loading model in memory from file: /build/models/tts_models/en/ljspeech/glow-tts
11:56PM DBG Loading Model tts_models/en/ljspeech/glow-tts with gRPC (file: /build/models/tts_models/en/ljspeech/glow-tts) (backend: coqui): {backendString:coqui model:tts_models/en/ljspeech/glow-tts thr
eads:0 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0002c4000 externalBackends:map[] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false parallelRequests:false}
11:56PM ERR error: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/coqui. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS

with DEBUG=true

localai-1  | 2:05AM DBG Loading external backend: /build/backend/python/coqui/run.sh
localai-1  | 2:05AM DBG Loading GRPC Process: /build/backend/python/coqui/run.sh
localai-1  | 2:05AM DBG GRPC Service for tts_models/en/ljspeech/glow-tts will be running at: '127.0.0.1:44865'
localai-1  | 2:05AM DBG GRPC Service state dir: /tmp/go-processmanager1634371744
localai-1  | 2:05AM DBG GRPC Service Started
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr Traceback (most recent call last):
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr   File "/build/backend/python/coqui/coqui_server.py", line 15, in <module>
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr     from TTS.api import TTS
localai-1  | 2:05AM DBG GRPC(tts_models/en/ljspeech/glow-tts-127.0.0.1:44865): stderr ModuleNotFoundError: No module named 'TTS'
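The ModuleNotFoundError in the traceback above shows the backend's Python environment is missing the TTS package; the "grpc process not found" message is a downstream symptom. A minimal sketch (not LocalAI code; the function name is illustrative) of checking such a dependency up front, before launching a backend server:

```python
import importlib.util

# Hypothetical pre-flight check: verify that a backend's Python dependency
# is resolvable by the import system before starting the gRPC server, so a
# missing package surfaces as a clear message rather than a startup traceback.
def backend_dep_available(module_name: str) -> bool:
    """Return True if `module_name` can be found without importing it."""
    return importlib.util.find_spec(module_name) is not None

if not backend_dep_available("TTS"):
    print("coqui backend dependency 'TTS' is missing; "
          "install it in the backend's Python environment")
```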

To Reproduce Just pull or build the v2.8.2 image. For reference, inference with CUDA works.

Expected behavior A working coqui TTS endpoint.

blob42 commented 9 months ago

I think the reason is that I am using the nvidia-based build: common-env/transformers-nvidia.yml does not list TTS as a requirement, while the plain transformers.yml does:

https://github.com/mudler/LocalAI/blob/9f2235c208b8a490f105774f984aa7225c4642b7/backend/python/common-env/transformers/transformers.yml#L36C1-L36C20
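A sketch of what the fix might look like in the nvidia environment file; the surrounding structure is an assumption based on the linked transformers.yml, and the entry should mirror whatever that file pins:

```yaml
# backend/python/common-env/transformers/transformers-nvidia.yml
# (excerpt, assumed layout mirroring transformers.yml)
dependencies:
  - pip:
      # Add the same TTS requirement the non-nvidia transformers.yml lists
      - TTS
```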

olariuromeo commented 9 months ago

Try building the backends first: run make clean, then BUILD_GRPC_FOR_BACKEND_LLAMA=ON make GO_TAGS=stablediffusion,tts build (or make GO_TAGS=stablediffusion,tts,tinydream build for all the backends) before running docker. It would be more useful if you could give more details about your workflow: .env, Go and Python versions, docker-compose.yaml, Dockerfile, and your ssh command. Once you have the GO_TAGS rebuild in place, simply run docker-compose up --build or docker-compose up -d. You need to set REBUILD=true and GO_TAGS=tts in your .env file in order to use tts.
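Based on the advice above, the relevant .env entries would look roughly like this (a sketch; only REBUILD and GO_TAGS are mentioned in this thread, so any other settings are your own):

```shell
# .env (excerpt) -- assumed placement; values taken from the comment above
REBUILD=true
GO_TAGS=stablediffusion,tts
```

With REBUILD=true the container recompiles LocalAI on startup, so the GO_TAGS take effect without rebuilding the image by hand.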

blob42 commented 9 months ago

I can confirm the gRPC backend works after adding the TTS dependency to the nvidia requirements file.