mudler / LocalAI

:robot: The free, Open Source OpenAI alternative. Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Generates text, audio, video, and images, with voice cloning capabilities.
https://localai.io
MIT License

failed loading model {"error":{"code":500,"message":"could not load model: rpc error"}} #1664

Closed: tringler closed this issue 5 months ago

tringler commented 5 months ago

LocalAI version: v2.7.0-3-g555bc02 (555bc0266530ceaa4edb3624fe970c88c497ffab)

Environment, CPU architecture, OS, and Version: Tried with GPU (Google Cloud n1-standard-8 - Azure Standard D8s v3 - 8 vcpus, 32 GiB memory, 1 GPU) and CPU (Azure Standard D8s v3 - 8 vcpus, 32 GiB memory)

@@@@@
CPU info:
model name      : Intel(R) Xeon(R) CPU @ 2.30GHz
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat md_clear arch_capabilities
CPU:    AVX    found OK
CPU:    AVX2   found OK
CPU: no AVX512 found
@@@@@
5:08PM DBG no galleries to load
5:08PM INF Starting LocalAI using 8 threads, with models path: /models
5:08PM INF LocalAI version: v2.7.0-3-g555bc02 (555bc0266530ceaa4edb3624fe970c88c497ffab)
5:08PM WRN [startup] failed resolving model '/usr/bin/local-ai'
5:08PM INF Preloading models from /models
5:08PM INF Model name: text-embedding-ada-002
5:08PM INF Model name: gpt-3.5-turbo

Describe the bug
I'm trying to run the examples https://github.com/mudler/LocalAI/tree/master/examples/langchain-chroma and https://github.com/mudler/LocalAI/blob/master/examples/query_data/README.md

The behaviour is the same for both examples, so I'm only describing https://github.com/mudler/LocalAI/blob/master/examples/query_data/; it looks like the same root cause. What am I doing wrong here?

To Reproduce

# Clone LocalAI
git clone https://github.com/go-skynet/LocalAI

cd LocalAI/examples/query_data
cp ../../.env .env

wget https://huggingface.co/skeskinen/ggml/resolve/main/all-MiniLM-L6-v2/ggml-model-q4_0.bin -O models/bert
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

export OPENAI_API_BASE=http://localhost:8081/v1  # :8080 is taken by the GCP agent, hence the remapped port
export OPENAI_API_KEY=sk-

docker-compose up -d --build

pip install -r ../langchain-chroma/requirements.txt
python store.py
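
As a sanity check before running store.py, the embeddings endpoint can be exercised directly. A minimal sketch, assuming the API is published on localhost:8081 as in the compose file below:

# Call the embeddings endpoint directly, bypassing langchain entirely.
curl http://localhost:8081/v1/embeddings \
  -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-ada-002", "input": "A test sentence"}'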

Compose file

version: '3.6'

services:
  api:
    image: quay.io/go-skynet/local-ai:latest
    build:
      context: ../../
      dockerfile: Dockerfile
    ports:
      - 8081:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
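
Since the compose file reserves an NVIDIA device, it is worth confirming that the GPU is actually visible from inside the container. A quick check, assuming the NVIDIA container toolkit is set up on the host:

# If this fails, the nvidia runtime is not wired up on the host
# and the backend silently runs on CPU.
docker compose exec api nvidia-smi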

Env file

## Set number of threads.
## Note: prefer the number of physical cores. Overbooking the CPU degrades performance notably.
THREADS=8

## Specify a different bind address (defaults to ":8080")
# ADDRESS=127.0.0.1:8080

## Default models context size
# CONTEXT_SIZE=512
#
## Define galleries.
## Models to install will be visible in `/models/available`
#GALLERIES=[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}]

## CORS settings
# CORS=true
# CORS_ALLOW_ORIGINS=*

## Default path for models
#
MODELS_PATH=/models

## Enable debug mode
DEBUG=true

## Disables COMPEL (Diffusers)
# COMPEL=0

## Enable/Disable single backend (useful if only one GPU is available)
SINGLE_ACTIVE_BACKEND=true

## Specify a build type. Available: cublas, openblas, clblas.
## cuBLAS: This is a GPU-accelerated version of the complete standard BLAS (Basic Linear Algebra Subprograms) library. It's provided by Nvidia and is part of their CUDA toolkit.
## OpenBLAS: This is an open-source implementation of the BLAS library that aims to provide highly optimized code for various platforms. It includes support for multi-threading and can be compiled to use hardware-specific features for additional performance. OpenBLAS can run on many kinds of hardware, including CPUs from Intel, AMD, and ARM.
## clBLAS: This is an open-source implementation of the BLAS library that uses OpenCL, a framework for writing programs that execute across heterogeneous platforms consisting of CPUs, GPUs, and other processors. clBLAS is designed to take advantage of the parallel computing power of GPUs but can also run on any hardware that supports OpenCL. This includes hardware from different vendors like Nvidia, AMD, and Intel.
BUILD_TYPE=cuBLAS

## Uncomment and set to true to enable rebuilding from source
# REBUILD=true

## Enable go tags, available: stablediffusion, tts
## stablediffusion: image generation with stablediffusion
## tts: enables text-to-speech with go-piper 
## (requires REBUILD=true)
#
# GO_TAGS=stablediffusion

## Path where to store generated images
IMAGE_PATH=/tmp

## Specify a default upload limit in MB (whisper)
# UPLOAD_LIMIT

## List of external GRPC backends (note on the container image this variable is already set to use extra backends available in extra/)
# EXTERNAL_GRPC_BACKENDS=my-backend:127.0.0.1:9000,my-backend2:/usr/bin/backend.py

### Advanced settings ###
### These are not really used by LocalAI itself, but by components in the stack ###
##
### Preload libraries
# LD_PRELOAD=

### Huggingface cache for models
# HUGGINGFACE_HUB_CACHE=/usr/local/huggingface

### Python backends GRPC max workers
### Default number of workers for GRPC Python backends.
### This actually controls whether a backend can process multiple requests or not.
# PYTHON_GRPC_MAX_WORKERS=1

### Define the number of parallel LLAMA.cpp workers (Defaults to 1)
# LLAMACPP_PARALLEL=1

### Enable to run parallel requests
# PARALLEL_REQUESTS=true

### Watchdog settings
###
# Enables the watchdog to kill backends that have been inactive for too long
# WATCHDOG_IDLE=true
#
# Enables the watchdog to kill backends that have been busy for too long
# WATCHDOG_BUSY=true
#
# Time in duration format (e.g. 1h30m) after which a backend is considered idle
# WATCHDOG_IDLE_TIMEOUT=5m
#
# Time in duration format (e.g. 1h30m) after which a backend is considered busy
# WATCHDOG_BUSY_TIMEOUT=5m
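
One note on the settings above: on the prebuilt quay.io image, BUILD_TYPE on its own does not change the already-compiled binary; per the comments in this same file, a rebuild has to be requested. A minimal sketch of the relevant lines (my reading of the .env comments, not verified against this exact image):

## Rebuild from source at container start so BUILD_TYPE takes effect.
## The comment above lists the accepted values in lowercase: cublas, openblas, clblas.
REBUILD=true
BUILD_TYPE=cublas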

Logs

6:21PM DBG Loading model in memory from file: /models/bert
6:21PM DBG Loading Model bert with gRPC (file: /models/bert) (backend: bert-embeddings): {backendString:bert-embeddings model:bert threads:4 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc000534600 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh petals:/build/backend/python/petals/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:true parallelRequests:false}
6:21PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/bert-embeddings
6:21PM DBG GRPC Service for bert will be running at: '127.0.0.1:36703'
6:21PM DBG GRPC Service state dir: /tmp/go-processmanager17793052
6:21PM DBG GRPC Service Started
6:21PM DBG GRPC(bert-127.0.0.1:36703): stderr 2024/01/30 18:21:23 gRPC Server listening at 127.0.0.1:36703
6:21PM DBG GRPC Service Ready
6:21PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:bert ContextSize:0 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:true NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:4 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/bert Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type:}
6:21PM DBG GRPC(bert-127.0.0.1:36703): stderr bert_load_from_file: unknown tensor '…' in model file
(the quoted tensor name is several lines of raw binary garbage read from the model file; elided here)
6:21PM DBG GRPC(bert-127.0.0.1:36703): stderr bert_bootstrap: failed to load model from '/models/bert'

completeLog.txt
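
The "unknown tensor" output above is the loader printing raw bytes from the file as a tensor name, which usually means the file on disk is not a valid ggml BERT model at all, e.g. a truncated download or an HTML error page saved by wget. A quick way to check what actually landed in models/ (a sketch; the expected file sizes are not asserted):

# An HTML page or a truncated file produces exactly this kind of
# "unknown tensor" noise at load time.
file models/bert models/ggml-gpt4all-j
ls -lh models/
head -c 64 models/bert | xxd   # a valid ggml file starts with a short magic header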

tringler commented 5 months ago

I was able to solve it with the following model definition:

{
   "url": "github:go-skynet/model-gallery/bert-embeddings.yaml",
   "name": "text-embedding-ada-002"
}
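
For completeness, that snippet is the shape of entry LocalAI accepts for preloading gallery models. A sketch of how it could look in the .env (PRELOAD_MODELS is an assumption here; the comment only shows the JSON fragment):

## Map the model name the example expects onto the gallery config.
PRELOAD_MODELS=[{"url":"github:go-skynet/model-gallery/bert-embeddings.yaml","name":"text-embedding-ada-002"}]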