mudler / LocalAI

:robot: The free, open-source OpenAI alternative. Self-hosted, community-driven, and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware; no GPU required. Runs gguf, transformers, diffusers, and many other model architectures. Generates text, audio, video, and images, with voice-cloning capabilities.
https://localai.io
MIT License

Cannot load ggml-large-v2-q5_0.bin with quay.io/go-skynet/local-ai:master-cublas-cuda12-ffmpeg #2323

Open edisonzf2020 opened 1 month ago

edisonzf2020 commented 1 month ago

**LocalAI version:** 2.15.0 (image: quay.io/go-skynet/local-ai:master-cublas-cuda12-ffmpeg)

**Environment, CPU architecture, OS, and Version:** Ubuntu 22.04

**Describe the bug**

```
1:42PM INF Loading model 'ggml-large-v2-q5_0.bin' with backend whisper
1:42PM ERR Server error error="rpc error: code = Unavailable desc = error reading from server: EOF" ip=192.168.1.28 latency=10.020056047s method=POST status=500 url=/v1/audio/transcriptions
```

**To Reproduce**

```
curl http://192.168.1.19:8090/v1/audio/transcriptions -H "Content-Type: multipart/form-data" -F file="@$PWD/gb1.ogg" -F model="whisper-1"
```
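For anyone reproducing this from a script, here is a hypothetical Python equivalent of the curl command above. The host, port, file name, and model name come from the report; the helper names are my own, and `transcribe` uses the third-party `requests` package.

```python
def transcription_url(base_url: str) -> str:
    """Build the OpenAI-compatible transcription endpoint URL."""
    return base_url.rstrip("/") + "/v1/audio/transcriptions"

def transcribe(base_url: str, audio_path: str, model: str = "whisper-1") -> dict:
    """POST the audio file as multipart/form-data, like the curl command."""
    import requests  # third-party; imported here so the URL helper has no dependency

    with open(audio_path, "rb") as f:
        resp = requests.post(
            transcription_url(base_url),
            files={"file": f},        # -F file="@..."
            data={"model": model},    # -F model="whisper-1"
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json()

# Usage, mirroring the report:
# transcribe("http://192.168.1.19:8090", "gb1.ogg")
```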

**Expected behavior**

**Logs**

**Additional context**

edisonzf2020 commented 1 month ago

When I build the image myself with docker build (image=core), the q5_0 model works fine:

```
8:29AM INF Loading model 'ggml-large-v3-q5_0.bin' with backend whisper
8:30AM INF Success ip=127.0.0.1 latency="27.248µs" method=GET status=200 url=/readyz
8:30AM INF Success ip=192.168.1.28 latency=48.973942329s method=POST status=200 url=/v1/audio/transcriptions
```
QuentinDrt commented 1 month ago

Hello, I have the same issue with the AIO Docker image v2.16.0-aio-gpu-nvidia-cuda-12.

This model doesn't work: https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium-q5_0.bin
But this one works: https://huggingface.co/ggerganov/whisper.cpp/resolve/main/ggml-medium.bin
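Since the same quantized file loads in a self-built image but fails in the published one, it may help to rule out a truncated or corrupted download first. A minimal sketch of such a check, assuming the file is a whisper.cpp ggml model whose first four bytes are the magic number 0x67676d6c (the check name is mine):

```python
import struct

GGML_MAGIC = 0x67676D6C  # magic number whisper.cpp expects at the file start

def has_ggml_magic(path: str) -> bool:
    """Return True if the file begins with the ggml magic number
    (read as a little-endian uint32), i.e. it is at least plausibly
    a complete whisper.cpp model download rather than a truncated one."""
    with open(path, "rb") as f:
        header = f.read(4)
    return len(header) == 4 and struct.unpack("<I", header)[0] == GGML_MAGIC

# Usage: has_ggml_magic("models/ggml-large-v2-q5_0.bin")
```

This only detects a bad download; if the magic check passes, the failure is more likely in the backend shipped with the container image.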