mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

Failed to run gemma 7B #1806

Closed · chenxinlong closed this 6 months ago

chenxinlong commented 6 months ago

LocalAI version:

root@node1:~/cxl# crictl images | grep local-ai
quay.io/go-skynet/local-ai                  latest                    eb2dbec811326       16.2GB

Environment, CPU architecture, OS, and Version:

Describe the bug

  1. Download gemma 7B from Hugging Face.
  2. Install local-ai on a Kubernetes cluster; pod YAML as below:
    apiVersion: v1
    kind: Pod
    metadata:
      name: localai
    spec:
      containers:
      - name: local-ai-container
        image: quay.io/go-skynet/local-ai:latest
        resources:
          requests:
            cpu: 4
            memory: 4Gi
          limits:
            cpu: 8
            memory: 8Gi
        ports:
          - containerPort: 8080
            hostPort: 8080
        args:
          - "--models-path"
          - "/models"
          - "--context-size"
          - "700"
          - "--threads"
          - "4"
        volumeMounts:
          - name: models-volume
            mountPath: /models
      volumes:
      - name: models-volume
        hostPath:
          path: /root/cxl/models
          type: Directory
  3. Send the request:
    root@node1:~# curl http://100.67.166.160:8080/v1/completions -H "Content-Type: application/json" -d '{ "model": "gemma-7b.gguf", "prompt": "Hi, how are you?", "temperature": 0.5 }'
    {"error":{"code":500,"message":"rpc error: code = Unknown desc = unimplemented","type":""}}

Expected behavior

Get an HTTP 200 response with the generated completion.
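That is, a JSON body roughly of this shape (illustrative values, following the OpenAI completions format that LocalAI mirrors):

    {
      "object": "text_completion",
      "model": "gemma-7b.gguf",
      "choices": [{ "index": 0, "text": "I'm doing well, thank you!", "finish_reason": "stop" }],
      "usage": { "prompt_tokens": 7, "completion_tokens": 8, "total_tokens": 15 }
    }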

Logs

@@@@@
Skipping rebuild
@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also https://github.com/go-skynet/LocalAI/issues/288
@@@@@
CPU info:
model name  : Intel(R) Xeon(R) CPU E5-2678 v3 @ 2.50GHz
flags       : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm cpuid_fault epb invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts md_clear flush_l1d
CPU:    AVX    found OK
CPU:    AVX2   found OK
CPU: no AVX512 found
@@@@@
5:56PM DBG no galleries to load
5:56PM INF Starting LocalAI using 4 threads, with models path: /models
5:56PM INF LocalAI version: v2.9.0 (ff88c390bb51d9567572815a63c575eb2e3dd062)
5:56PM INF Preloading models from /models

 ┌───────────────────────────────────────────────────┐
 │                   Fiber v2.50.0                   │
 │               http://127.0.0.1:8080               │
 │       (bound on host 0.0.0.0 and port 8080)       │
 │                                                   │
 │ Handlers ........... 105  Processes ........... 1 │
 │ Prefork ....... Disabled  PID ................ 14 │
 └───────────────────────────────────────────────────┘

5:57PM INF Trying to load the model 'gemma-7b.gguf' with all the available backends: llama-cpp, llama-ggml, llama, gpt4all, bert-embeddings, rwkv, whisper, stablediffusion, tinydream, piper, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/transformers-musicgen/run.sh, /build/backend/python/autogptq/run.sh, /build/backend/python/diffusers/run.sh, /build/backend/python/transformers/run.sh, /build/backend/python/exllama2/run.sh, /build/backend/python/exllama/run.sh, /build/backend/python/mamba/run.sh, /build/backend/python/petals/run.sh, /build/backend/python/vall-e-x/run.sh, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/coqui/run.sh, /build/backend/python/vllm/run.sh, /build/backend/python/bark/run.sh
5:57PM INF [llama-cpp] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend llama-cpp
5:57PM INF [llama-cpp] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
5:57PM INF [llama-ggml] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend llama-ggml
5:57PM INF [llama-ggml] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
5:57PM INF [llama] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend llama
5:57PM INF [llama] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
5:57PM INF [gpt4all] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend gpt4all
5:57PM INF [gpt4all] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
5:57PM INF [bert-embeddings] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend bert-embeddings
5:57PM INF [bert-embeddings] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
5:57PM INF [rwkv] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend rwkv
5:57PM INF [rwkv] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
5:57PM INF [whisper] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend whisper
5:57PM INF [whisper] Fails: could not load model: rpc error: code = Unknown desc = unable to load model
5:57PM INF [stablediffusion] Attempting to load
5:57PM INF Loading model 'gemma-7b.gguf' with backend stablediffusion
5:57PM INF [stablediffusion] Loads OK
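
Two things stand out in the trace above. Every text-capable backend fails (llama-cpp and llama die with an EOF, meaning the backend process itself exited), and the model is finally claimed by stablediffusion, which does not serve text completions; that would explain the "unimplemented" 500 from /v1/completions. Separately, REBUILD and CMAKE_ARGS from the startup banner are environment variables, so in the pod spec they belong under the container, e.g. (a sketch; per the CPU info above only AVX512 is missing, so disabling the other instruction sets should not be needed):

    env:
      - name: REBUILD
        value: "true"
      - name: CMAKE_ARGS
        value: "-DLLAMA_AVX512=OFF"

With REBUILD=true the image recompiles the backends at container startup, so it takes effect on every (re)start for as long as the variable is set.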

Additional context

I don't know how much memory/CPU is required, but if I don't set the pod CPU/memory limits, local-ai uses up all available memory and hangs the machine on the first request.
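
As a rough sizing rule (an assumption on my part, not from the LocalAI docs): a gguf model needs at least its own file size in RAM, plus context overhead. A 4-bit quantized gemma 7B gguf is around 5 GB, while a full-precision export is roughly 17 GB, so if gemma-7b.gguf here is unquantized it cannot fit within the 8Gi limit and the backend would be OOM-killed, which would also match the EOF errors in the logs. A sketch with hypothetical values sized for a ~5 GB quantized model:

    resources:
      requests:
        cpu: 4
        memory: 8Gi
      limits:
        cpu: 8
        memory: 12Gi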

devregnfo commented 6 months ago

Are these REBUILD and CMAKE_ARGS settings persistent, or must we set them on every run?