matatonic / openedai-speech

An OpenAI API compatible text-to-speech server using Coqui AI's xtts_v2 and/or Piper TTS as the backend.

Docker compose up failing (Mac M2) #5

Closed: bcwilsondotcom closed this issue 1 month ago

bcwilsondotcom commented 2 months ago

Running on a Mac M2:

docker compose up
[+] Running 0/1
 ⠏ server Pulling                                                                               1.0s
[+] Building 3.6s (10/12)                                                        docker:desktop-linux
 => [server internal] load build definition from Dockerfile                                     0.0s
 => => transferring dockerfile: 496B                                                            0.0s
 => [server internal] load metadata for docker.io/library/python:3.11-slim                      0.4s
 => [server internal] load .dockerignore                                                        0.0s
 => => transferring context: 2B                                                                 0.0s
 => [server stage-0 1/8] FROM docker.io/library/python:3.11-slim@sha256:6d2502238109c929569ae99355e28890c438cb11bc88ef02cd189c173b3db07c  0.0s
 => [server internal] load build context                                                        0.0s
 => => transferring context: 963B                                                               0.0s
 => CACHED [server stage-0 2/8] RUN apt-get update && apt-get install --no-install-recommends -y curl git ffmpeg  0.0s
 => CACHED [server stage-0 3/8] RUN mkdir -p /app/voices                                        0.0s
 => CACHED [server stage-0 4/8] WORKDIR /app                                                    0.0s
 => CACHED [server stage-0 5/8] COPY *.txt /app/                                                0.0s
 => ERROR [server stage-0 6/8] RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt  3.2s

[server stage-0 6/8] RUN --mount=type=cache,target=/root/.cache/pip pip install -r requirements.txt:
0.768 Collecting git+https://github.com/huggingface/parler-tts.git (from -r requirements.txt (line 11))
0.768   Cloning https://github.com/huggingface/parler-tts.git to /tmp/pip-req-build-dhep97ob
0.769   Running command git clone --filter=blob:none --quiet https://github.com/huggingface/parler-tts.git /tmp/pip-req-build-dhep97ob
1.933   Resolved https://github.com/huggingface/parler-tts.git to commit be2acc26bce06ae868c7d956ee1708e33e189dd4
1.938   Installing build dependencies: started
2.496   Installing build dependencies: finished with status 'done'
2.496   Getting requirements to build wheel: started
2.576   Getting requirements to build wheel: finished with status 'done'
2.576   Installing backend dependencies: started
2.852   Installing backend dependencies: finished with status 'done'
2.852   Preparing metadata (pyproject.toml): started
2.927   Preparing metadata (pyproject.toml): finished with status 'done'
3.016 Collecting fastapi (from -r requirements.txt (line 1))
3.017   Using cached fastapi-0.111.0-py3-none-any.whl.metadata (25 kB)
3.056 Collecting uvicorn (from -r requirements.txt (line 2))
3.056   Using cached uvicorn-0.29.0-py3-none-any.whl.metadata (6.3 kB)
3.074 Collecting piper-tts==1.2.0 (from -r requirements.txt (line 4))
3.075   Using cached piper_tts-1.2.0-py3-none-any.whl.metadata (776 bytes)
3.099 ERROR: Could not find a version that satisfies the requirement onnxruntime-gpu (from versions: none)
3.099 ERROR: No matching distribution found for onnxruntime-gpu

failed to solve: process "/bin/sh -c pip install -r requirements.txt" did not complete successfully: exit code: 1

matatonic commented 2 months ago

Interesting, I have no idea if it will work on Mac. You can safely remove (or comment out) onnxruntime-gpu from requirements.txt, though; it's only there for CUDA-accelerated piper-tts, which is not very important. I don't know whether that will be enough, as I don't have a Mac to test this with. Let me know how it goes!
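For anyone hitting the same error, a minimal sketch of that workaround (assuming onnxruntime-gpu sits on its own line in requirements.txt and that piper-tts pulls in the CPU onnxruntime build as its own dependency, so no replacement line is needed):

    # requirements.txt
    # onnxruntime-gpu    <- commented out: the CUDA-only wheel has no macOS/arm64 build

Then rebuild the failed layer and retry (the service is named "server" in this compose file):

    docker compose build --no-cache server
    docker compose up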