mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It allows you to generate text, audio, video and images, and it also has voice cloning capabilities.
https://localai.io
MIT License

rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:37785: connect: connection refused" #771

Closed. luoweb closed this issue 4 months ago.

luoweb commented 1 year ago

LocalAI version:

V1.21

root@63429046747f:/build# ./local-ai --version
LocalAI version 4548473 (4548473acf4f57ff149492272cc1fdba3521f83a)

Environment, CPU architecture, OS, and Version: Intel x86, CentOS

Describe the bug: gRPC error

To Reproduce

Expected behavior: Output response

Logs:

llmai-api-1 | 3:04AM DBG Loading model 'openllama7b' greedly
llmai-api-1 | 3:04AM DBG [llama] Attempting to load
llmai-api-1 | 3:04AM DBG Loading model llama from openllama7b
llmai-api-1 | 3:04AM DBG Loading model in memory from file: /models/openllama7b
llmai-api-1 | 3:04AM DBG Loading GRPC Model%!(EXTRA string=llama, model.Options={llama openllama7b 4 /tmp/localai/backend_data 0xc0000400b0 0xc000296a20})
llmai-api-1 | 3:04AM DBG Loading GRPC Process%!(EXTRA string=/tmp/localai/backend_data/backend-assets/grpc/llama)
llmai-api-1 | 3:04AM DBG GRPC Service for 'llama' (openllama7b) will be running at: 'localhost:37785'
llmai-api-1 | 3:04AM DBG GRPC Service Started
llmai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:37785: connect: connection refused"
llmai-api-1 | 3:04AM DBG GRPC(llama-openllama7b-localhost:37785): stderr 2023/07/19 03:04:00 gRPC Server listening at 127.0.0.1:37785
llmai-api-1 | 3:04AM DBG GRPC Service Ready
llmai-api-1 | 3:04AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:/models/openllama7b ContextSize:512 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:4 LibrarySearchPath:}

Additional context

luoweb commented 1 year ago

The problem was solved; the error is normal and the word "error" was misunderstood. Maybe the log should be clearer: when the model is not configured correctly, it generates many log lines like "rpc error: code = Unavailable desc = connection error: desc = ".

localai-bot commented 1 year ago

:warning::warning::warning::warning::warning:

Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!

But... I can also be funny or helpful :smile_cat: and, generally speaking, I can provide good tips or point to places to look in the documentation or in the code, based on what you wrote in the issue.

Don't engage in conversation with me, I don't support (yet) replying!

:warning::warning::warning::warning::warning:

The issue seems to be related to the gRPC service not starting correctly on the LocalAI server. The error message indicates that the connection was refused when trying to connect to the gRPC port on localhost. This could be due to a misconfiguration of the gRPC server or a conflict with other services running on the same port. To troubleshoot this issue, you can try the following steps:

  1. Check if the GRPC server is properly configured and started by looking at the logs generated when starting LocalAI. You may need to enable debug mode (--debug or DEBUG=true) to see more detailed logs.
  2. Ensure that there are no other services running on the same port as the GRPC service. You can check this by running a netstat command and looking for any active connections on the relevant port.
  3. Check if the LocalAI model is properly loaded and configured. You may need to verify that the model is correctly loaded and that the required dependencies are installed.
  4. If the issue persists, you may need to file a bug report with the LocalAI project or seek assistance from the community on the relevant forums or chat channels.
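
For steps 1 and 2 above, a minimal sketch of what that check could look like is shown below (the port 37785 is taken from the log in this issue; substitute whatever port your instance reports, and note that `ss`/`netstat` availability depends on the host or image):

```
# Check whether anything is listening on the backend port reported in the debug log
ss -tlnp | grep 37785

# Re-run LocalAI with debug logging enabled so the backend's stderr is shown
DEBUG=true ./local-ai --models-path ./models --address 127.0.0.1:8080
```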


Florian-Bloechinger commented 1 year ago

I had the same problem. The model I used was alpaca.13b.ggmlv3.q8_0.bin; with another model, ggml-model-gpt4all-falcon-q4_0.bin, it worked. Could it be because the alpaca.13b.ggmlv3.q8_0.bin model is a GPU model?

eyalsofer commented 1 year ago

@luoweb - can you share what was configured incorrectly? I'm seeing this error too: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:43883: connect: connection refused". Can I just ignore it? I'm using the default configuration.

gris-gris commented 1 year ago

Same problem as above:

localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:35733: connect: connection refused"

Default guide: Example: Use GPT4ALL-J model with docker-compose

rajivmehtaflex commented 1 year ago

I'm trying to use this fantastic technology in Google Colab with the following command, because Google Colab does not support Docker and I want to use a GPU. It works fine on Gitpod; the only thing is that it's too slow, so I moved to Google Colab.

./local-ai --models-path ./models --address 127.0.0.1:8889  --threads 4

I'm using dolly-v2-12b-q4_0.bin

But I got the following response:

rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:44399: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:34133: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:37763: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:46833: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:39095: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:33899: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:45317: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:32935: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43033: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:41829: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:37653: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:40749: connect: connection refused"
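
A minimal next step in a setup like this (sketch only; `DEBUG=true` is the same switch used elsewhere in this thread) is to rerun the same command with debug logging enabled, so the backend's stderr shows why the model fails to load instead of only the repeated "connection refused" lines:

```
# Same command as above, with debug logging turned on
DEBUG=true ./local-ai --models-path ./models --address 127.0.0.1:8889 --threads 4
```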

finger42 commented 1 year ago

Hello @gris-gris

Same problem as above:

localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:35733: connect: connection refused"

Default guide: Example: Use GPT4ALL-J model with docker-compose

I did notice a similar problem on my side when the .tmpl files were mixed up.

I am using container: local-ai:v1.21.0

Can you try adding the following lines to your docker-compose?

    environment:
      - DEBUG=true
      - PRELOAD_MODELS=[{"url":"github:go-skynet/model-gallery/gpt4all-j.yaml"}]
      - MODELS_PATH=/models
      - REBUILD=false
    volumes:
      - ./models:/models:cached

docker-compose up looks now like:

api_1  | @@@@@
api_1  | Skipping rebuild
api_1  | @@@@@
api_1  | If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
api_1  | If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
api_1  | CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
api_1  | see the documentation at: https://localai.io/basics/build/index.html
api_1  | Note: See also https://github.com/go-skynet/LocalAI/issues/288
api_1  | @@@@@
api_1  | 9:01AM DBG no galleries to load
api_1  | 9:01AM INF Starting LocalAI using 14 threads, with models path: /models
api_1  | 9:01AM INF LocalAI version: v1.21.0 (fb6cce487fb53d9de1c1a6b3414261f52b5cdbe0)
api_1  | 9:01AM DBG Model: gpt4all-j (config: {PredictionOptions:{Model:ggml-gpt4all-j.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.2 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0} Name:gpt4all-j StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 F16:false NUMA:false Threads:0 Debug:false Roles:map[] Embeddings:false Backend:gpt4all-j TemplateConfig:{Completion:gpt4all-completion Functions: Chat:gpt4all-chat Edit:} MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false TensorSplit: MainGPU: ImageGenerationAssets: PromptCachePath: PromptCacheAll:false PromptCacheRO:false Grammar: PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:}})
api_1  | 9:01AM DBG Extracting backend assets files to /tmp/localai/backend_data
api_1  | 9:01AM DBG Checking "ggml-gpt4all-j.bin" exists and matches SHA
api_1  | 9:01AM DBG File "ggml-gpt4all-j.bin" already exists and matches the SHA. Skipping download
api_1  | 9:01AM DBG Prompt template "gpt4all-completion" written
api_1  | 9:01AM DBG Prompt template "gpt4all-chat" written
api_1  | 9:01AM DBG Written config file /models/gpt4all-j.yaml
api_1  | 
api_1  |  ┌───────────────────────────────────────────────────┐ 
api_1  |  │                   Fiber v2.48.0                   │ 
api_1  |  │               http://127.0.0.1:8080               │ 
api_1  |  │       (bound on host 0.0.0.0 and port 8080)       │ 
api_1  |  │                                                   │ 
api_1  |  │ Handlers ............ 33  Processes ........... 1 │ 
api_1  |  │ Prefork ....... Disabled  PID ................. 7 │ 
api_1  |  └───────────────────────────────────────────────────┘ 

and my models folder on the host looks like:

-rw-rw-r--  1 root    root    3785248281 Jul 23 10:36 ggml-gpt4all-j.bin
-rw-r--r--  1 root    root           179 Jul 23 11:01 gpt4all-chat.tmpl
-rw-r--r--  1 root    root            57 Jul 23 11:01 gpt4all-completion.tmpl
-rw-r--r--  1 root    root           202 Jul 23 11:01 gpt4all-j.yaml

To test:

LOCALAI=http://localhost:8080
curl $LOCALAI/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "gpt4all-j", 
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 2 
   }'
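
As a quick sanity check (assuming the standard OpenAI-compatible model listing endpoint is available), you can also confirm the configuration was picked up before sending a chat request:

```
LOCALAI=http://localhost:8080
# "gpt4all-j" should appear in the response once the YAML config and .tmpl files are in place
curl $LOCALAI/v1/models
```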

Hope that helps.

DonXtasy commented 1 year ago

I'm getting the same error!

2023-07-23 12:09:45 ┌───────────────────────────────────────────────────┐
2023-07-23 12:09:45 │                   Fiber v2.48.0                   │
2023-07-23 12:09:45 │               http://127.0.0.1:8080               │
2023-07-23 12:09:45 │       (bound on host 0.0.0.0 and port 8080)       │
2023-07-23 12:09:45 │                                                   │
2023-07-23 12:09:45 │ Handlers ............ 31  Processes ........... 1 │
2023-07-23 12:09:45 │ Prefork ....... Disabled  PID ................. 7 │
2023-07-23 12:09:45 └───────────────────────────────────────────────────┘
2023-07-23 12:09:45
2023-07-23 12:11:51 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:42001: connect: connection refused"
2023-07-23 12:11:54 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:42001: connect: connection refused"
2023-07-23 12:11:54 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:42391: connect: connection refused"
2023-07-23 12:11:59 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:42391: connect: connection refused"
2023-07-23 12:11:59 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:34697: connect: connection refused"
2023-07-23 12:12:04 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:34697: connect: connection refused"
2023-07-23 12:12:04 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:41587: connect: connection refused"
2023-07-23 12:12:18 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:41587: connect: connection refused"
2023-07-23 12:12:18 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43633: connect: connection refused"

That's my docker-compose:

version: '3.6'

services:
  api:
    image: quay.io/go-skynet/local-ai:latest
    build:
      context: .
      dockerfile: Dockerfile
    ports:

After a minute or so I get a

Error: Request failed with status code 500

in Flowise

DonXtasy commented 1 year ago

This is the debug output:

2023-07-23 12:29:27 10:29AM DBG Loading model bert-embeddings from bert-MiniLM-L6-v2q4_0.bin 2023-07-23 12:29:27 10:29AM DBG Model already loaded in memory: bert-MiniLM-L6-v2q4_0.bin 2023-07-23 12:29:27 10:29AM DBG GRPC Model not responding%!(EXTRA string=bert-MiniLM-L6-v2q4_0.bin) 2023-07-23 12:29:27 10:29AM DBG GRPC Process is not responding%!(EXTRA string=bert-MiniLM-L6-v2q4_0.bin) 2023-07-23 12:29:27 10:29AM DBG Loading model in memory from file: /models/bert-MiniLM-L6-v2q4_0.bin 2023-07-23 12:29:27 10:29AM DBG Loading GRPC Model%!(EXTRA string=bert-embeddings, model.Options={bert-embeddings bert-MiniLM-L6-v2q4_0.bin 4 /tmp/localai/backend_data 0xc000126000 0xc00109fa70}) 2023-07-23 12:29:27 10:29AM DBG Loading GRPC Process%!(EXTRA string=/tmp/localai/backend_data/backend-assets/grpc/bert-embeddings) 2023-07-23 12:29:27 10:29AM DBG GRPC Service for 'bert-embeddings' (bert-MiniLM-L6-v2q4_0.bin) will be running at: 'localhost:38181' 2023-07-23 12:29:27 10:29AM DBG GRPC Service Started 2023-07-23 12:29:27 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:38181: connect: connection refused" 2023-07-23 12:29:27 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023/07/23 10:29:27 gRPC Server listening at 127.0.0.1:38181 2023-07-23 12:29:28 10:29AM DBG GRPC Service Ready 2023-07-23 12:29:28 10:29AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:/models/bert-MiniLM-L6-v2q4_0.bin ContextSize:0 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:true NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:4 LibrarySearchPath:} 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: loading model from '/models/bert-MiniLM-L6-v2q4_0.bin' - please wait ... 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: n_vocab = 30522 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: n_max_tokens = 512 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: n_embd = 384 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: n_intermediate = 1536 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: n_head = 12 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: n_layer = 6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: f16 = 2 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bert_load_from_file: ggml ctx size = 13.57 MB 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr fatal error: unexpected signal during runtime execution 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr [signal SIGSEGV: segmentation violation code=0x1 addr=0x48 pc=0x834f9c] 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime stack: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.throw({0x93e51d?, 0x82400b?}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/panic.go:1047 +0x5d fp=0x7f3c22a43b70 sp=0x7f3c22a43b40 pc=0x44529d 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.sigpanic() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/signal_unix.go:825 +0x3e9 fp=0x7f3c22a43bd0 sp=0x7f3c22a43b70 pc=0x45b749 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 11 [syscall]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.cgocall(0x8024a0, 0xc0000e5890) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/cgocall.go:157 +0x5c fp=0xc0000e5868 sp=0xc0000e5830 pc=0x4143bc 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr github.com/go-skynet/go-bert%2ecpp._Cfunc_bert_embeddings(0x1197050, 0x7f3c0c000b60, 0xc000400000) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr _cgo_gotypes.go:127 +0x4c fp=0xc0000e5890 sp=0xc0000e5868 pc=0x800b2c 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr github.com/go-skynet/go-bert%2ecpp.(Bert).Embeddings.func1(0x7f3c10000b60?, 0x4?, 0x1197050?) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /build/go-bert/gobert.go:38 +0x74 fp=0xc0000e58d8 sp=0xc0000e5890 pc=0x8012f4 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr github.com/go-skynet/go-bert%2ecpp.(Bert).Embeddings(0xc00009a9c0?, {0xc0000f1400, 0x3b5}, {0xc0000e59c0, 0x1, 0x0?}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /build/go-bert/gobert.go:38 +0x115 fp=0xc0000e5940 sp=0xc0000e58d8 pc=0x8010f5 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr github.com/go-skynet/LocalAI/pkg/grpc/llm/bert.(Embeddings).Embeddings(0xc0000e5a00?, 0x41baa5?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /build/pkg/grpc/llm/bert/bert.go:32 +0xbf fp=0xc0000e59d8 sp=0xc0000e5940 pc=0x801bbf 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr github.com/go-skynet/LocalAI/pkg/grpc.(server).Embedding(0x91e720?, {0xc0000e3180?, 0x50c6e6?}, 0x0?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /build/pkg/grpc/server.go:33 +0x28 fp=0xc0000e5a10 sp=0xc0000e59d8 pc=0x7ffa88 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr github.com/go-skynet/LocalAI/pkg/grpc/proto._Backend_Embedding_Handler({0x8f2fc0?, 0xc0000a3e80}, {0x9d5d30, 0xc0001e3560}, 0xc0001a7ea0, 0x0) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /build/pkg/grpc/proto/backend_grpc.pb.go:275 +0x170 fp=0xc0000e5a68 sp=0xc0000e5a10 pc=0x7fe2f0 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc.(Server).processUnaryRPC(0xc0001da3c0, {0x9d89b8, 0xc000256820}, 0xc00018b440, 0xc0001e2f30, 0xc89688, 0x0) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:1337 +0xdf3 fp=0xc0000e5e48 sp=0xc0000e5a68 pc=0x7e6db3 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc.(Server).handleStream(0xc0001da3c0, {0x9d89b8, 0xc000256820}, 0xc00018b440, 0x0) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:1714 +0xa36 fp=0xc0000e5f68 sp=0xc0000e5e48 pc=0x7ebed6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc.(Server).serveStreams.func1.1() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:959 +0x98 fp=0xc0000e5fe0 sp=0xc0000e5f68 pc=0x7e47b8 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000e5fe8 sp=0xc0000e5fe0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by google.golang.org/grpc.(Server).serveStreams.func1 2023-07-23 
12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:957 +0x18c 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 1 [IO wait]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc0001c7b68 sp=0xc0001c7b48 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.netpollblock(0x7f3c21a28d78?, 0x413a4f?, 0x0?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/netpoll.go:527 +0xf7 fp=0xc0001c7ba0 sp=0xc0001c7b68 pc=0x440937 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.runtime_pollWait(0x7f3c21a1fef8, 0x72) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/netpoll.go:306 +0x89 fp=0xc0001c7bc0 sp=0xc0001c7ba0 pc=0x471a29 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.(pollDesc).wait(0xc0001a4380?, 0x4?, 0x0) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc0001c7be8 sp=0xc0001c7bc0 pc=0x4dfd32 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.(pollDesc).waitRead(...) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.(FD).Accept(0xc0001a4380) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/internal/poll/fd_unix.go:614 +0x2bd fp=0xc0001c7c90 sp=0xc0001c7be8 pc=0x4e563d 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr net.(netFD).accept(0xc0001a4380) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/net/fd_unix.go:172 +0x35 fp=0xc0001c7d48 sp=0xc0001c7c90 pc=0x5f6b35 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr net.(TCPListener).accept(0xc000012630) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/net/tcpsock_posix.go:148 +0x25 fp=0xc0001c7d70 sp=0xc0001c7d48 pc=0x60f3a5 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr net.(TCPListener).Accept(0xc000012630) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/net/tcpsock.go:297 +0x3d fp=0xc0001c7da0 sp=0xc0001c7d70 pc=0x60e49d 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc.(Server).Serve(0xc0001da3c0, {0x9d55c0?, 0xc000012630}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:821 +0x475 fp=0xc0001c7ee8 sp=0xc0001c7da0 pc=0x7e33d5 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr github.com/go-skynet/LocalAI/pkg/grpc.StartServer({0x7ffe62acae5f?, 0xc000024190?}, {0x9d7f30?, 0xc0000142a0}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /build/pkg/grpc/server.go:121 +0x125 fp=0xc0001c7f50 sp=0xc0001c7ee8 pc=0x800705 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr main.main() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /build/cmd/grpc/bert-embeddings/main.go:19 +0x85 fp=0xc0001c7f80 sp=0xc0001c7f50 pc=0x802085 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.main() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:250 +0x207 fp=0xc0001c7fe0 sp=0xc0001c7f80 pc=0x447bc7 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0001c7fe8 sp=0xc0001c7fe0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 2 [force gc (idle)]: 2023-07-23 12:29:28 10:29AM DBG 
GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00008cfb0 sp=0xc00008cf90 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goparkunlock(...) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:387 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.forcegchelper() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:305 +0xb0 fp=0xc00008cfe0 sp=0xc00008cfb0 pc=0x447e30 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00008cfe8 sp=0xc00008cfe0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by runtime.init.6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:293 +0x25 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 3 [GC sweep wait]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00008d780 sp=0xc00008d760 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goparkunlock(...) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:387 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.bgsweep(0x0?) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc00008d7c8 sp=0xc00008d780 pc=0x4341ee 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gcenable.func1() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mgc.go:178 +0x26 fp=0xc00008d7e0 sp=0xc00008d7c8 pc=0x4294a6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00008d7e8 sp=0xc00008d7e0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by runtime.gcenable 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mgc.go:178 +0x6b 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 4 [GC scavenge wait]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0xc00002e070?, 0x9ce7f0?, 0x1?, 0x0?, 0x0?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00008df70 sp=0xc00008df50 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goparkunlock(...) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:387 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.(scavengerState).park(0xcd5580) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc00008dfa0 sp=0xc00008df70 pc=0x432113 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.bgscavenge(0x0?) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc00008dfc8 sp=0xc00008dfa0 pc=0x4326e5 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gcenable.func2() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mgc.go:179 +0x26 fp=0xc00008dfe0 sp=0xc00008dfc8 pc=0x429446 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00008dfe8 sp=0xc00008dfe0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by runtime.gcenable 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mgc.go:179 +0xaa 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 5 [finalizer wait]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0x1a0?, 0xcd5aa0?, 0x60?, 0x78?, 0xc00008c770?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00008c628 sp=0xc00008c608 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.runfinq() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc00008c7e0 sp=0xc00008c628 pc=0x4284e7 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00008c7e8 sp=0xc00008c7e0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by runtime.createfing 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/mfinal.go:163 +0x45 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 39 [select]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0xc00031bf00?, 0x2?, 0xe3?, 0x34?, 0xc00031bed4?) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00031bd60 sp=0xc00031bd40 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.selectgo(0xc00031bf00, 0xc00031bed0, 0x6198c9?, 0x0, 0xc000298000?, 0x1) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/select.go:327 +0x7be fp=0xc00031bea0 sp=0xc00031bd60 pc=0x457bde 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc/internal/transport.(controlBuffer).get(0xc00027c230, 0x1) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/controlbuf.go:418 +0x115 fp=0xc00031bf30 sp=0xc00031bea0 pc=0x7588b5 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc/internal/transport.(loopyWriter).run(0xc00013a700) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/controlbuf.go:552 +0x91 fp=0xc00031bf90 sp=0xc00031bf30 pc=0x759031 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func2() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/http2_server.go:341 +0xda fp=0xc00031bfe0 sp=0xc00031bf90 pc=0x770a1a 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00031bfe8 sp=0xc00031bfe0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/http2_server.go:338 +0x1bb3 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 40 [select]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0xc000089f70?, 0x4?, 0x0?, 0x0?, 0xc000089ec0?) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000089d08 sp=0xc000089ce8 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.selectgo(0xc000089f70, 0xc000089eb8, 0x0?, 0x0, 0x0?, 0x1) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/select.go:327 +0x7be fp=0xc000089e48 sp=0xc000089d08 pc=0x457bde 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc/internal/transport.(http2Server).keepalive(0xc000256820) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/http2_server.go:1155 +0x233 fp=0xc000089fc8 sp=0xc000089e48 pc=0x7780f3 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func4() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/http2_server.go:344 +0x26 fp=0xc000089fe0 sp=0xc000089fc8 pc=0x770906 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000089fe8 sp=0xc000089fe0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/http2_server.go:344 +0x1bf8 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr goroutine 41 [IO wait]: 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.gopark(0x100000008?, 0xb?, 0x0?, 0x0?, 0x6?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00025faa0 sp=0xc00025fa80 pc=0x447ff6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.netpollblock(0x4c5165?, 0x413a4f?, 0x0?) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/netpoll.go:527 +0xf7 fp=0xc00025fad8 sp=0xc00025faa0 pc=0x440937 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.runtime_pollWait(0x7f3c21a1fe08, 0x72) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/netpoll.go:306 +0x89 fp=0xc00025faf8 sp=0xc00025fad8 pc=0x471a29 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.(pollDesc).wait(0xc000250080?, 0xc000290000?, 0x0) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc00025fb20 sp=0xc00025faf8 pc=0x4dfd32 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.(pollDesc).waitRead(...) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr internal/poll.(FD).Read(0xc000250080, {0xc000290000, 0x8000, 0x8000}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/internal/poll/fd_unix.go:167 +0x299 fp=0xc00025fbb8 sp=0xc00025fb20 pc=0x4e1119 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr net.(netFD).Read(0xc000250080, {0xc000290000?, 0x1060100000000?, 0x8?}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/net/fd_posix.go:55 +0x29 fp=0xc00025fc00 sp=0xc00025fbb8 pc=0x5f49a9 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr net.(conn).Read(0xc000254018, {0xc000290000?, 0x18?, 0xc000100000?}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/net/net.go:183 +0x45 fp=0xc00025fc48 sp=0xc00025fc00 pc=0x6064e5 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr net.(TCPConn).Read(0x800010601?, {0xc000290000?, 0x0?, 0xc00025fca8?}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr :1 +0x29 fp=0xc00025fc78 sp=0xc00025fc48 pc=0x6195c9 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr bufio.(Reader).Read(0xc000262300, {0xc00027e120, 0x9, 0x0?}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/bufio/bufio.go:237 +0x1bb fp=0xc00025fcb0 sp=0xc00025fc78 pc=0x56e21b 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr io.ReadAtLeast({0x9d2040, 0xc000262300}, {0xc00027e120, 0x9, 0x9}, 0x9) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/io/io.go:332 +0x9a fp=0xc00025fcf8 sp=0xc00025fcb0 pc=0x4bf0da 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr io.ReadFull(...) 
2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/io/io.go:351 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr golang.org/x/net/http2.readFrameHeader({0xc00027e120?, 0x9?, 0xc00012e078?}, {0x9d2040?, 0xc000262300?}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/golang.org/x/net@v0.10.0/http2/frame.go:237 +0x6e fp=0xc00025fd48 sp=0xc00025fcf8 pc=0x718aee 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr golang.org/x/net/http2.(Framer).ReadFrame(0xc00027e0e0) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/golang.org/x/net@v0.10.0/http2/frame.go:498 +0x95 fp=0xc00025fdf8 sp=0xc00025fd48 pc=0x719335 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc/internal/transport.(http2Server).HandleStreams(0xc000256820, 0x0?, 0x0?) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/internal/transport/http2_server.go:642 +0x167 fp=0xc00025ff10 sp=0xc00025fdf8 pc=0x773d47 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc.(Server).serveStreams(0xc0001da3c0, {0x9d89b8?, 0xc000256820}) 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:946 +0x162 fp=0xc00025ff80 sp=0xc00025ff10 pc=0x7e4502 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr google.golang.org/grpc.(Server).handleRawConn.func1() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:889 +0x46 fp=0xc00025ffe0 sp=0xc00025ff80 pc=0x7e3da6 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr runtime.goexit() 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00025ffe8 sp=0xc00025ffe0 pc=0x476e81 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr created by google.golang.org/grpc.(*Server).handleRawConn 2023-07-23 12:29:28 10:29AM DBG GRPC(bert-embeddings-bert-MiniLM-L6-v2q4_0.bin-localhost:38181): stderr /go/pkg/mod/google.golang.org/grpc@v1.56.2/server.go:888 +0x185

eyalsofer commented 1 year ago

@finger42 - what about the Helm chart? I'm working with k8sgpt and LocalAI as the AI backend. When k8sgpt is up, I start seeing those connection-refused errors in the local-ai pod logs. However, I'm able to curl LocalAI from a curl pod. It worked for me before with LocalAI and k8sgpt, with the same configuration.

galfrylich commented 1 year ago

I'm getting "DBG no galleries to load". What can be the solution for this?

luoweb commented 1 year ago

@luoweb - can you share what was configured incorrectly? im seeing this err too: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp [::1]:43883: connect: connection refused" can i just ignore it? i'm using the default configuration

I started successfully with the non-GPU binary; if it fails, it may need to be retried. For the model configuration, see the model-gallery repo: https://github.com/go-skynet/model-gallery/blob/main/openllama_7b.yaml
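
If it helps, that same gallery file can also be preloaded at startup, following the PRELOAD_MODELS pattern shown earlier in this thread. The sketch below reuses the image name from the docker-compose above; the exact docker invocation is an assumption, not a verified config:

```
docker run -p 8080:8080 -v $PWD/models:/models \
  -e DEBUG=true \
  -e MODELS_PATH=/models \
  -e 'PRELOAD_MODELS=[{"url":"github:go-skynet/model-gallery/openllama_7b.yaml"}]' \
  quay.io/go-skynet/local-ai:latest
```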

AlexsJones commented 1 year ago

Not sure if this is useful but this is what I see:

F
    * could not load model: rpc error: code = Unknown desc = stat /models/ggml-gpt4all-j: no such file or directory
    * could not load model: rpc error: code = Unknown desc = stat /models/ggml-gpt4all-j: no such file or directory
    * could not load model: rpc error: code = Unknown desc = unsupported model type /models/ggml-gpt4all-j (should end with .onnx)
    * backend unsupported: /build/extra/grpc/huggingface/huggingface.py
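
Those messages point at a missing model file rather than a gRPC transport problem: the backends are asked to load /models/ggml-gpt4all-j and the file is not there. A quick check (paths taken from the errors above; the container name is a placeholder) would be:

```
# On the host: the directory mounted into the container at /models
ls -lh ./models/

# Inside the running container (replace <container> with your API container name)
docker exec <container> ls -lh /models/
```
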
chris-hatton commented 11 months ago

Same error; using dual Xeons, I tried REBUILD=true but that failed (maybe telling). I'm using an Intel(R) Xeon(R) CPU X5570, which is not modern, but I would still expect it to basically work. I tried many different models but always get the same rpc error when calling completion. I guess the RPC error is not specific and just means the internal endpoint didn't start for some other reason. Happy to try debugging steps if someone can guide me.
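
For CPUs without AVX/AVX2 such as the X5570, the startup banner quoted earlier in this thread suggests rebuilding with those instruction sets disabled. A rough sketch, reusing the exact CMAKE_ARGS from that banner and the image name used elsewhere in the thread (the docker invocation itself is an assumption), might look like:

```
docker run -p 8080:8080 -v $PWD/models:/models \
  -e REBUILD=true \
  -e CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF" \
  -e MODELS_PATH=/models \
  quay.io/go-skynet/local-ai:latest
```

Rebuilding on an older CPU can take a long time; if the rebuild itself fails, the compile error should be visible in the container output.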

artshade commented 11 months ago

Can you try to add the following line to your docker-compose ?


Thank you for the debug request.

The following are the logs for two "How are you?" requests, identical to the general example:


cURL request

```
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-gpt4all-j",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.9
}'
```

Docker container logs ``` ocalai-api-1 | @@@@@ localai-api-1 | Skipping rebuild localai-api-1 | @@@@@ localai-api-1 | If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true localai-api-1 | If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed: localai-api-1 | CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF" localai-api-1 | see the documentation at: https://localai.io/basics/build/index.html localai-api-1 | Note: See also https://github.com/go-skynet/LocalAI/issues/288 localai-api-1 | @@@@@ localai-api-1 | CPU info: localai-api-1 | model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz localai-api-1 | flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt lahf_lm epb pti > localai-api-1 | CPU: no AVX found localai-api-1 | CPU: no AVX2 found localai-api-1 | CPU: no AVX512 found localai-api-1 | @@@@@ localai-api-1 | ^[[90m5:15PM^[[0m ^[[33mDBG^[[0m no galleries to load localai-api-1 | ^[[90m5:15PM^[[0m ^[[32mINF^[[0m Starting LocalAI using 4 threads, with models path: /models localai-api-1 | ^[[90m5:15PM^[[0m ^[[32mINF^[[0m LocalAI version: v1.23.2 (acd829a7a0e1623c0871c8b34c36c76afd4feac8) localai-api-1 | ^[[90m5:15PM^[[0m ^[[33mDBG^[[0m Extracting backend assets files to /tmp/localai/backend_data localai-api-1 | ^[[90m5:15PM^[[0m ^[[33mDBG^[[0m Checking "ggml-gpt4all-j.bin" exists and matches SHA localai-api-1 | ^[[90m5:15PM^[[0m ^[[33mDBG^[[0m Downloading "https://gpt4all.io/models/ggml-gpt4all-j.bin" localai-api-1 | ^[[90m5:15PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 144.2 MiB/3.5 GiB (3.99%) ETA: 2m0.186478834s localai-api-1 | ^[[90m5:15PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 169.3 MiB/3.5 GiB (4.69%) ETA: 3m23.47555042s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 414.4 MiB/3.5 GiB (11.48%) ETA: 1m55.863201907s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 667.4 MiB/3.5 GiB (18.49%) ETA: 1m28.303688333s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 829.9 MiB/3.5 GiB (22.99%) ETA: 1m23.915942132s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 952.2 MiB/3.5 GiB (26.38%) ETA: 1m23.874444451s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 1.1 GiB/3.5 GiB (31.47%) ETA: 1m18.892527076s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 1.2 GiB/3.5 GiB (34.49%) ETA: 1m18.392009675s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 1.4 GiB/3.5 GiB (40.48%) ETA: 1m8.139160672s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 1.6 GiB/3.5 GiB (45.87%) ETA: 1m2.918203841s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 1.7 GiB/3.5 GiB (49.51%) ETA: 59.492000391s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 1.9 GiB/3.5 GiB (52.99%) ETA: 56.178282337s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 2.0 GiB/3.5 GiB (56.76%) ETA: 52.060927676s localai-api-1 | ^[[90m5:16PM^[[0m ^[[33mDBG^[[0m Downloading 
ggml-gpt4all-j.bin: 2.2 GiB/3.5 GiB (62.47%) ETA: 44.499997573s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 2.3 GiB/3.5 GiB (65.77%) ETA: 42.118985801s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 2.4 GiB/3.5 GiB (68.64%) ETA: 39.312858052s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 2.4 GiB/3.5 GiB (68.80%) ETA: 41.311706285s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 2.5 GiB/3.5 GiB (70.09%) ETA: 41.035414082s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 2.7 GiB/3.5 GiB (75.35%) ETA: 33.111424287s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 2.8 GiB/3.5 GiB (78.01%) ETA: 30.249998799s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.0 GiB/3.5 GiB (84.30%) ETA: 20.90733588s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.1 GiB/3.5 GiB (87.10%) ETA: 17.373163104s localai-api-1 | ^[[90m5:17PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.2 GiB/3.5 GiB (91.74%) ETA: 11.011214499s localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.3 GiB/3.5 GiB (94.36%) ETA: 9.123218489s localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.4 GiB/3.5 GiB (97.21%) ETA: 5.010279179s localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.4 GiB/3.5 GiB (97.25%) ETA: 5.093007002s localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.4 GiB/3.5 GiB (97.55%) ETA: 4.650832588s localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Downloading ggml-gpt4all-j.bin: 3.5 GiB/3.5 GiB (99.88%) ETA: 222.865977ms localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m File "ggml-gpt4all-j.bin" downloaded and verified localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Prompt template "gpt4all-completion" written localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Prompt template "gpt4all-chat" written localai-api-1 | ^[[90m5:18PM^[[0m ^[[33mDBG^[[0m Written config file /models/gpt4all-j.yaml localai-api-1 | localai-api-1 | ┌───────────────────────────────────────────────────┐ localai-api-1 | │ Fiber v2.48.0 │ localai-api-1 | │ http://127.0.0.1:8080 │ localai-api-1 | │ (bound on host 0.0.0.0 and port 8080) │ localai-api-1 | │ │ localai-api-1 | │ Handlers ............ 32 Processes ........... 1 │ localai-api-1 | │ Prefork ....... Disabled PID ................ 
14 │ localai-api-1 | └───────────────────────────────────────────────────┘ localai-api-1 | localai-api-1 | [127.0.0.1]:42190 200 - GET /readyz localai-api-1 | [127.0.0.1]:52260 200 - GET /readyz localai-api-1 | [127.0.0.1]:43846 200 - GET /readyz localai-api-1 | [127.0.0.1]:38506 200 - GET /readyz localai-api-1 | [127.0.0.1]:35302 200 - GET /readyz localai-api-1 | [127.0.0.1]:60658 200 - GET /readyz localai-api-1 | [127.0.0.1]:39776 200 - GET /readyz localai-api-1 | [127.0.0.1]:39050 200 - GET /readyz localai-api-1 | [127.0.0.1]:55542 200 - GET /readyz localai-api-1 | [127.0.0.1]:53072 200 - GET /readyz localai-api-1 | [127.0.0.1]:39124 200 - GET /readyz localai-api-1 | [127.0.0.1]:55328 200 - GET /readyz localai-api-1 | [127.0.0.1]:43226 200 - GET /readyz localai-api-1 | [127.0.0.1]:39968 200 - GET /readyz localai-api-1 | [127.0.0.1]:44014 200 - GET /readyz localai-api-1 | [127.0.0.1]:33824 200 - GET /readyz localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Request received: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Configuration read: &{PredictionOptions:{Model:ggml-gpt4all-j Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:512 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 Neg> localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Parameters: &{PredictionOptions:{Model:ggml-gpt4all-j Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:512 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePro> localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Prompt (before templating): How are you? localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Template found, input modified to: The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response. localai-api-1 | ### Prompt: localai-api-1 | How are you? localai-api-1 | ### Response: localai-api-1 | localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Prompt (after templating): The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response. localai-api-1 | ### Prompt: localai-api-1 | How are you? 
localai-api-1 | ### Response: localai-api-1 | localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model 'ggml-gpt4all-j' greedly from all the available backends: llama, gpt4all, falcon, gptneox, bert-embeddings, falcon-ggml, gptj, gpt2, dolly, mpt, replit, starcoder, bloomz, rwkv, whisper, stablediffusion, piper, /build/extra/grpc/huggingface/huggingface.py localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [llama] Attempting to load localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model llama from ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model in memory from file: /models/ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Model llama: {backendString:llama modelFile:ggml-gpt4all-j threads:4 assetDir:/tmp/localai/backend_data context:0xc00003c098 gRPCOptions:0xc0006141e0 externalBackends:map[huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]} localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:33331' localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service state dir: /tmp/go-processmanager202245157 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Started localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:33331: connect: connection refused" localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr 2023/08/10 17:34:35 gRPC Server listening at 127.0.0.1:33331 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Ready localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:/models/ggml-gpt4all-j ContextSize:512 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 Main> localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr SIGILL: illegal instruction localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr PC=0x838907 m=7 sigcode=2 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr signal arrived during cgo execution localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr instruction bytes: 0xc5 0xfa 0x11 0x84 0x24 0x80 0x0 0x0 0x0 0xc5 0xfa 0x11 0x4c 0x24 0x78 0xc5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 50 [syscall]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.cgocall(0x8167a0, 0xc00009d708) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/cgocall.go:157 +0x5c fp=0xc00009d6e0 sp=0xc00009d6a8 pc=0x42215c localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr github.com/go-skynet/go-llama%2ecpp._Cfunc_load_model(0x1a5b1b0, 0x200, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x200, ...) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr _cgo_gotypes.go:250 +0x4d fp=0xc00009d708 sp=0xc00009d6e0 pc=0x80efed localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr github.com/go-skynet/go-llama%2ecpp.New({0xc000288048, 0x16}, {0xc000280200, 0x7, 0x8f13c0?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /build/go-llama/llama.go:28 +0x297 fp=0xc00009d878 sp=0xc00009d708 pc=0x80f7f7 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr github.com/go-skynet/LocalAI/pkg/grpc/llm/llama.(*LLM).Load(0xc0000142a0, 0xc0002981e0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /build/pkg/grpc/llm/llama/llama.go:74 +0x8e5 fp=0xc00009d9a8 sp=0xc00009d878 pc=0x813345 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr github.com/go-skynet/LocalAI/pkg/grpc.(*server).LoadModel(0x9715a0?, {0xc0002981e0?, 0x5d9586?}, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /build/pkg/grpc/server.go:42 +0x28 fp=0xc00009da10 sp=0xc00009d9a8 pc=0x8156a8 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr github.com/go-skynet/LocalAI/pkg/grpc/proto._Backend_LoadModel_Handler({0x94f360?, 0xc00006dd20}, {0xa330f0, 0xc000292360}, 0xc000284150, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /build/pkg/grpc/proto/backend_grpc.pb.go:236 +0x170 fp=0xc00009da68 sp=0xc00009da10 pc=0x80c350 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001961e0, {0xa35d78, 0xc000102340}, 0xc0002aa000, 0xc00019ea50, 0xcfb910, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1360 +0xe23 fp=0xc00009de48 sp=0xc00009da68 pc=0x7f4f43 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc.(*Server).handleStream(0xc0001961e0, {0xa35d78, 0xc000102340}, 0xc0002aa000, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1737 +0xa36 fp=0xc00009df68 sp=0xc00009de48 pc=0x7fa096 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc.(*Server).serveStreams.func1.1() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:982 +0x98 fp=0xc00009dfe0 sp=0xc00009df68 pc=0x7f2918 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009dfe8 sp=0xc00009dfe0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by google.golang.org/grpc.(*Server).serveStreams.func1 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:980 +0x18c localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m 
GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 1 [IO wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00018fb68 sp=0xc00018fb48 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.netpollblock(0x7f992832d5f8?, 0x4217ef?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/netpoll.go:527 +0xf7 fp=0xc00018fba0 sp=0xc00018fb68 pc=0x44e6d7 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.runtime_pollWait(0x7f9928322ef8, 0x72) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/netpoll.go:306 +0x89 fp=0xc00018fbc0 sp=0xc00018fba0 pc=0x47f7c9 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.(*pollDesc).wait(0xc0000e8280?, 0x4?, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc00018fbe8 sp=0xc00018fbc0 pc=0x4edad2 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.(*pollDesc).waitRead(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.(*FD).Accept(0xc0000e8280) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/internal/poll/fd_unix.go:614 +0x2bd fp=0xc00018fc90 sp=0xc00018fbe8 pc=0x4f33dd localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr net.(*netFD).accept(0xc0000e8280) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/net/fd_unix.go:172 +0x35 fp=0xc00018fd48 sp=0xc00018fc90 pc=0x6048d5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr net.(*TCPListener).accept(0xc000012618) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/net/tcpsock_posix.go:148 +0x25 fp=0xc00018fd70 sp=0xc00018fd48 pc=0x61d145 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr net.(*TCPListener).Accept(0xc000012618) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/net/tcpsock.go:297 +0x3d fp=0xc00018fda0 sp=0xc00018fd70 pc=0x61c23d localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc.(*Server).Serve(0xc0001961e0, {0xa32980?, 0xc000012618}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:844 +0x475 fp=0xc00018fee8 sp=0xc00018fda0 pc=0x7f1535 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr 
github.com/go-skynet/LocalAI/pkg/grpc.StartServer({0x7ffdb1eb6dc5?, 0xc000024190?}, {0xa352f0?, 0xc0000142a0}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /build/pkg/grpc/server.go:121 +0x125 fp=0xc00018ff50 sp=0xc00018fee8 pc=0x8162a5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr main.main() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /build/cmd/grpc/llama/main.go:22 +0x85 fp=0xc00018ff80 sp=0xc00018ff50 pc=0x816405 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.main() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:250 +0x207 fp=0xc00018ffe0 sp=0xc00018ff80 pc=0x455967 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00018ffe8 sp=0xc00018ffe0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 2 [force gc (idle)]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000056fb0 sp=0xc000056f90 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goparkunlock(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:387 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.forcegchelper() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:305 +0xb0 fp=0xc000056fe0 sp=0xc000056fb0 pc=0x455bd0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by runtime.init.6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:293 +0x25 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 3 [GC sweep wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000057780 sp=0xc000057760 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goparkunlock(...) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:387 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.bgsweep(0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc0000577c8 sp=0xc000057780 pc=0x441f8e localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gcenable.func1() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mgc.go:178 +0x26 fp=0xc0000577e0 sp=0xc0000577c8 pc=0x437246 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000577e8 sp=0xc0000577e0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by runtime.gcenable localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mgc.go:178 +0x6b localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 4 [GC scavenge wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0xc000032070?, 0xa2bb68?, 0x1?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000057f70 sp=0xc000057f50 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goparkunlock(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:387 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.(*scavengerState).park(0xd47a40) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc000057fa0 sp=0xc000057f70 pc=0x43feb3 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.bgscavenge(0x0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc000057fc8 sp=0xc000057fa0 pc=0x440485 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gcenable.func2() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mgc.go:179 +0x26 fp=0xc000057fe0 sp=0xc000057fc8 pc=0x4371e6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000057fe8 sp=0xc000057fe0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by runtime.gcenable localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mgc.go:179 +0xaa localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 5 [finalizer wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0x1a0?, 0xd47f60?, 0x60?, 0x78?, 0xc000056770?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000056628 sp=0xc000056608 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.runfinq() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000567e0 sp=0xc000056628 pc=0x436287 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000567e8 sp=0xc0000567e0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by runtime.createfing localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/mfinal.go:163 +0x45 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 34 [select]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0xc0002a3f00?, 0x2?, 0x83?, 0x12?, 0xc0002a3ed4?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc0002a3d60 sp=0xc0002a3d40 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.selectgo(0xc0002a3f00, 0xc0002a3ed0, 0x627669?, 0x0, 0xc000136000?, 0x1) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/select.go:327 +0x7be fp=0xc0002a3ea0 sp=0xc0002a3d60 pc=0x46597e localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc000146050, 0x1) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/controlbuf.go:418 +0x115 fp=0xc0002a3f30 sp=0xc0002a3ea0 pc=0x7666f5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc000284070) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/controlbuf.go:552 +0x91 fp=0xc0002a3f90 sp=0xc0002a3f30 pc=0x766e71 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func2() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:341 +0xda fp=0xc0002a3fe0 sp=0xc0002a3f90 pc=0x77e85a localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0002a3fe8 sp=0xc0002a3fe0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:338 +0x1bb3 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 35 [select]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0xc000058770?, 0x4?, 0x10?, 0x0?, 0xc0000586c0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000058508 sp=0xc0000584e8 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.selectgo(0xc000058770, 0xc0000586b8, 0xc0001d2000?, 0x0, 0xc000058690?, 0x1) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/select.go:327 +0x7be fp=0xc000058648 sp=0xc000058508 pc=0x46597e localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc/internal/transport.(*http2Server).keepalive(0xc000102340) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:1155 +0x233 fp=0xc0000587c8 sp=0xc000058648 pc=0x785f33 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func4() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:344 +0x26 fp=0xc0000587e0 sp=0xc0000587c8 pc=0x77e746 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000587e8 sp=0xc0000587e0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:344 +0x1bf8 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr goroutine 36 [IO wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.gopark(0x445120?, 0xb?, 0x0?, 0x0?, 0x6?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000128aa0 sp=0xc000128a80 pc=0x455d96 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.netpollblock(0x4d2f05?, 0x4217ef?, 0x0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/netpoll.go:527 +0xf7 fp=0xc000128ad8 sp=0xc000128aa0 pc=0x44e6d7 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.runtime_pollWait(0x7f9928322e08, 0x72) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/netpoll.go:306 +0x89 fp=0xc000128af8 sp=0xc000128ad8 pc=0x47f7c9 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.(*pollDesc).wait(0xc000272280?, 0xc00012e000?, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc000128b20 sp=0xc000128af8 pc=0x4edad2 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.(*pollDesc).waitRead(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr internal/poll.(*FD).Read(0xc000272280, {0xc00012e000, 0x8000, 0x8000}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/internal/poll/fd_unix.go:167 +0x299 fp=0xc000128bb8 sp=0xc000128b20 pc=0x4eeeb9 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr net.(*netFD).Read(0xc000272280, {0xc00012e000?, 0x1060100000000?, 0x8?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/net/fd_posix.go:55 +0x29 fp=0xc000128c00 sp=0xc000128bb8 pc=0x602749 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr net.(*conn).Read(0xc000216050, {0xc00012e000?, 0x18?, 0xc000200000?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/net/net.go:183 +0x45 fp=0xc000128c48 sp=0xc000128c00 pc=0x614285 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr net.(*TCPConn).Read(0x800010601?, {0xc00012e000?, 0x0?, 0xc000128ca8?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr :1 +0x29 fp=0xc000128c78 sp=0xc000128c48 pc=0x627369 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr bufio.(*Reader).Read(0xc0001200c0, {0xc000148040, 0x9, 0x0?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/bufio/bufio.go:237 +0x1bb fp=0xc000128cb0 sp=0xc000128c78 pc=0x57913b localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr io.ReadAtLeast({0xa2f3e0, 0xc0001200c0}, {0xc000148040, 0x9, 0x9}, 0x9) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/io/io.go:332 +0x9a fp=0xc000128cf8 sp=0xc000128cb0 pc=0x4cce7a localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr io.ReadFull(...) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/io/io.go:351 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr golang.org/x/net/http2.readFrameHeader({0xc000148040?, 0x9?, 0xc00022e030?}, {0xa2f3e0?, 0xc0001200c0?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/golang.org/x/net@v0.12.0/http2/frame.go:237 +0x6e fp=0xc000128d48 sp=0xc000128cf8 pc=0x751ece localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr golang.org/x/net/http2.(*Framer).ReadFrame(0xc000148000) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/golang.org/x/net@v0.12.0/http2/frame.go:498 +0x95 fp=0xc000128df8 sp=0xc000128d48 pc=0x752715 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams(0xc000102340, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:642 +0x167 fp=0xc000128f10 sp=0xc000128df8 pc=0x781b87 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc.(*Server).serveStreams(0xc0001961e0, {0xa35d78?, 0xc000102340}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:969 +0x162 fp=0xc000128f80 sp=0xc000128f10 pc=0x7f2662 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr google.golang.org/grpc.(*Server).handleRawConn.func1() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:912 +0x46 fp=0xc000128fe0 sp=0xc000128f80 pc=0x7f1f06 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000128fe8 sp=0xc000128fe0 pc=0x484c21 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr created by google.golang.org/grpc.(*Server).handleRawConn localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:911 +0x185 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rax 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rbx 0x1 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rcx 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rdx 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rdi 0x7f991affcbe0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rsi 0x1a5b1b0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rbp 0x7f991affcdc0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m 
GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rsp 0x7f991affc520 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r8 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r9 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r10 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r11 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r12 0x1a5b1b0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r13 0x7f991affcbe0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r14 0xc0000e2d00 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr r15 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rip 0x838907 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr rflags 0x10202 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr cs 0x33 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr fs 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:33331): stderr gs 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [llama] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [gpt4all] Attempting to load localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model gpt4all from ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model in memory from file: /models/ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Model gpt4all: {backendString:gpt4all modelFile:ggml-gpt4all-j threads:4 assetDir:/tmp/localai/backend_data context:0xc00003c098 gRPCOptions:0xc0006141e0 externalBackends:map[huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]} localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/gpt4all localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:45351' localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service state dir: /tmp/go-processmanager1827090515 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Started localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:45351: connect: connection refused" localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:45351): stderr 2023/08/10 17:34:36 gRPC Server listening at 127.0.0.1:45351 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Ready localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:/models/ggml-gpt4all-j ContextSize:512 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 Main> localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:45351): stderr load_model: error 'Model format not supported (no matching implementation found)' localai-api-1 | ^[[90m5:34PM^[[0m 
^[[33mDBG^[[0m [gpt4all] Fails: could not load model: rpc error: code = Unknown desc = failed loading model localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [falcon] Attempting to load localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model falcon from ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model in memory from file: /models/ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Model falcon: {backendString:falcon modelFile:ggml-gpt4all-j threads:4 assetDir:/tmp/localai/backend_data context:0xc00003c098 gRPCOptions:0xc0006141e0 externalBackends:map[huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]} localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/falcon localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:43753' localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service state dir: /tmp/go-processmanager2145537263 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Started localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:43753: connect: connection refused" localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service NOT ready localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [falcon] Fails: grpc service not ready localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [gptneox] Attempting to load localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model gptneox from ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model in memory from file: /models/ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Model gptneox: {backendString:gptneox modelFile:ggml-gpt4all-j threads:4 assetDir:/tmp/localai/backend_data context:0xc00003c098 gRPCOptions:0xc0006141e0 externalBackends:map[huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]} localai-api-1 | ^[[90m5:34PM^[[0m 
^[[33mDBG^[[0m Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/gptneox localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:34935' localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service state dir: /tmp/go-processmanager2026962865 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Started localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:34935: connect: connection refused" localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr 2023/08/10 17:34:47 gRPC Server listening at 127.0.0.1:34935 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Ready localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:/models/ggml-gpt4all-j ContextSize:512 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 Main> localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr GGML_ASSERT: /build/go-ggml-transformers/ggml.cpp/src/ggml.c:4128: wtype != GGML_TYPE_COUNT localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr SIGABRT: abort localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr PC=0x7f9429ac7ce1 m=3 sigcode=18446744073709551610 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr signal arrived during cgo execution localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 54 [syscall]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.cgocall(0x80dcc0, 0xc00012d908) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/cgocall.go:157 +0x5c fp=0xc00012d8e0 sp=0xc00012d8a8 pc=0x41ef7c localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr github.com/go-skynet/go-ggml-transformers%2ecpp._Cfunc_gpt_neox_bootstrap(0x7f93f4000de0, 0x7f93f4000ca0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr _cgo_gotypes.go:400 +0x4c fp=0xc00012d908 sp=0xc00012d8e0 pc=0x80b2ec localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr github.com/go-skynet/go-ggml-transformers%2ecpp.NewGPTNeoX.func1(0xc000288078?, 0x16?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /build/go-ggml-transformers/gptneox.go:23 +0x46 fp=0xc00012d948 sp=0xc00012d908 pc=0x80b706 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr github.com/go-skynet/go-ggml-transformers%2ecpp.NewGPTNeoX({0xc000288078, 0x16}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /build/go-ggml-transformers/gptneox.go:23 +0x56 fp=0xc00012d988 sp=0xc00012d948 pc=0x80b636 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr github.com/go-skynet/LocalAI/pkg/grpc/llm/transformers.(*GPTNeoX).Load(0xc000128290, 0x7f9429a4af18?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /build/pkg/grpc/llm/transformers/gptneox.go:21 +0x29 fp=0xc00012d9a8 sp=0xc00012d988 pc=0x80be89 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr github.com/go-skynet/LocalAI/pkg/grpc.(*server).LoadModel(0x98ad00?, {0xc000298320?, 0x5d63a6?}, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /build/pkg/grpc/server.go:42 +0x28 fp=0xc00012da10 sp=0xc00012d9a8 pc=0x80c9a8 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr github.com/go-skynet/LocalAI/pkg/grpc/proto._Backend_LoadModel_Handler({0x9689a0?, 0xc000119cf0}, {0xa4bb10, 0xc000292570}, 0xc0002840e0, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /build/pkg/grpc/proto/backend_grpc.pb.go:236 +0x170 fp=0xc00012da68 sp=0xc00012da10 pc=0x809170 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc.(*Server).processUnaryRPC(0xc0001cc1e0, {0xa4e798, 0xc0002c6000}, 0xc0002ce000, 0xc0001d49f0, 0xd0f770, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1360 +0xe23 fp=0xc00012de48 sp=0xc00012da68 pc=0x7f1d63 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc.(*Server).handleStream(0xc0001cc1e0, {0xa4e798, 0xc0002c6000}, 0xc0002ce000, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:1737 +0xa36 fp=0xc00012df68 sp=0xc00012de48 pc=0x7f6eb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc.(*Server).serveStreams.func1.1() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:982 +0x98 fp=0xc00012dfe0 sp=0xc00012df68 pc=0x7ef738 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00012dfe8 sp=0xc00012dfe0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by google.golang.org/grpc.(*Server).serveStreams.func1 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:980 +0x18c localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 1 [IO wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc0001b3b68 sp=0xc0001b3b48 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.netpollblock(0xc0001b3bf8?, 0x41e60f?, 0x0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/netpoll.go:527 +0xf7 fp=0xc0001b3ba0 sp=0xc0001b3b68 pc=0x44b4f7 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.runtime_pollWait(0x7f9400ecaeb8, 0x72) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/netpoll.go:306 +0x89 fp=0xc0001b3bc0 sp=0xc0001b3ba0 pc=0x47c5e9 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.(*pollDesc).wait(0xc000190280?, 0x0?, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc0001b3be8 sp=0xc0001b3bc0 pc=0x4ea8f2 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.(*pollDesc).waitRead(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.(*FD).Accept(0xc000190280) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/internal/poll/fd_unix.go:614 +0x2bd fp=0xc0001b3c90 sp=0xc0001b3be8 pc=0x4f01fd localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr net.(*netFD).accept(0xc000190280) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/net/fd_unix.go:172 +0x35 fp=0xc0001b3d48 sp=0xc0001b3c90 pc=0x6016f5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr net.(*TCPListener).accept(0xc000124600) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/net/tcpsock_posix.go:148 +0x25 fp=0xc0001b3d70 sp=0xc0001b3d48 pc=0x619f65 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr net.(*TCPListener).Accept(0xc000124600) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/net/tcpsock.go:297 +0x3d fp=0xc0001b3da0 sp=0xc0001b3d70 pc=0x61905d localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc.(*Server).Serve(0xc0001cc1e0, {0xa4b3a0?, 0xc000124600}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:844 +0x475 fp=0xc0001b3ee8 sp=0xc0001b3da0 pc=0x7ee355 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr github.com/go-skynet/LocalAI/pkg/grpc.StartServer({0x7ffeb7d29dc3?, 0xc00011a160?}, {0xa4dd10?, 0xc000128290}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /build/pkg/grpc/server.go:121 +0x125 fp=0xc0001b3f50 sp=0xc0001b3ee8 pc=0x80d5a5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr main.main() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /build/cmd/grpc/gptneox/main.go:20 +0x85 fp=0xc0001b3f80 sp=0xc0001b3f50 pc=0x80d705 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr 
runtime.main() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:250 +0x207 fp=0xc0001b3fe0 sp=0xc0001b3f80 pc=0x452787 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0001b3fe8 sp=0xc0001b3fe0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 2 [force gc (idle)]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000056fb0 sp=0xc000056f90 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goparkunlock(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:387 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.forcegchelper() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:305 +0xb0 fp=0xc000056fe0 sp=0xc000056fb0 pc=0x4529f0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000056fe8 sp=0xc000056fe0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by runtime.init.6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:293 +0x25 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 3 [GC sweep wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000057780 sp=0xc000057760 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goparkunlock(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:387 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.bgsweep(0x0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mgcsweep.go:278 +0x8e fp=0xc0000577c8 sp=0xc000057780 pc=0x43edae localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gcenable.func1() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mgc.go:178 +0x26 fp=0xc0000577e0 sp=0xc0000577c8 pc=0x434066 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000577e8 sp=0xc0000577e0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by runtime.gcenable localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mgc.go:178 +0x6b localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 4 [GC scavenge wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0xc000032070?, 0xa445b8?, 0x1?, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000057f70 sp=0xc000057f50 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goparkunlock(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:387 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.(*scavengerState).park(0xd5b7e0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mgcscavenge.go:400 +0x53 fp=0xc000057fa0 sp=0xc000057f70 pc=0x43ccd3 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.bgscavenge(0x0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mgcscavenge.go:628 +0x45 fp=0xc000057fc8 sp=0xc000057fa0 pc=0x43d2a5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gcenable.func2() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mgc.go:179 +0x26 fp=0xc000057fe0 sp=0xc000057fc8 pc=0x434006 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000057fe8 sp=0xc000057fe0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by runtime.gcenable localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mgc.go:179 +0xaa localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 18 [finalizer wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0x1a0?, 0xd5bd00?, 0xe0?, 0x24?, 0xc000056770?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000056628 sp=0xc000056608 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.runfinq() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mfinal.go:193 +0x107 fp=0xc0000567e0 sp=0xc000056628 pc=0x4330a7 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0000567e8 sp=0xc0000567e0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by runtime.createfing localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/mfinal.go:163 +0x45 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 51 [select]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0xc00009ff00?, 0x2?, 0xa3?, 0xe0?, 0xc00009fed4?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc00009fd60 sp=0xc00009fd40 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.selectgo(0xc00009ff00, 0xc00009fed0, 0x624489?, 0x0, 0xc0002b0000?, 0x1) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/select.go:327 +0x7be fp=0xc00009fea0 sp=0xc00009fd60 pc=0x46279e localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0002960a0, 0x1) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/controlbuf.go:418 +0x115 fp=0xc00009ff30 sp=0xc00009fea0 pc=0x763515 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00008e070) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/controlbuf.go:552 +0x91 fp=0xc00009ff90 sp=0xc00009ff30 pc=0x763c91 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func2() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:341 +0xda fp=0xc00009ffe0 sp=0xc00009ff90 pc=0x77b67a localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc00009ffe8 sp=0xc00009ffe0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:338 +0x1bb3 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 52 [select]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0xc0002a2f70?, 0x4?, 0x10?, 0x0?, 0xc0002a2ec0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc0002a2d08 sp=0xc0002a2ce8 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.selectgo(0xc0002a2f70, 0xc0002a2eb8, 0x0?, 0x0, 0x0?, 0x1) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/select.go:327 +0x7be fp=0xc0002a2e48 sp=0xc0002a2d08 pc=0x46279e localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc/internal/transport.(*http2Server).keepalive(0xc0002c6000) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:1155 +0x233 fp=0xc0002a2fc8 sp=0xc0002a2e48 pc=0x782d53 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc/internal/transport.NewServerTransport.func4() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:344 +0x26 fp=0xc0002a2fe0 sp=0xc0002a2fc8 pc=0x77b566 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc0002a2fe8 sp=0xc0002a2fe0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by google.golang.org/grpc/internal/transport.NewServerTransport localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:344 +0x1bf8 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr goroutine 53 [IO wait]: localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.gopark(0x441f40?, 0xb?, 0x0?, 0x0?, 0x6?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/proc.go:381 +0xd6 fp=0xc000210aa0 sp=0xc000210a80 pc=0x452bb6 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.netpollblock(0x4cfd25?, 0x41e60f?, 0x0?) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/netpoll.go:527 +0xf7 fp=0xc000210ad8 sp=0xc000210aa0 pc=0x44b4f7 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.runtime_pollWait(0x7f9400ecadc8, 0x72) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/netpoll.go:306 +0x89 fp=0xc000210af8 sp=0xc000210ad8 pc=0x47c5e9 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.(*pollDesc).wait(0xc00029a000?, 0xc0002a8000?, 0x0) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32 fp=0xc000210b20 sp=0xc000210af8 pc=0x4ea8f2 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.(*pollDesc).waitRead(...) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/internal/poll/fd_poll_runtime.go:89 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr internal/poll.(*FD).Read(0xc00029a000, {0xc0002a8000, 0x8000, 0x8000}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/internal/poll/fd_unix.go:167 +0x299 fp=0xc000210bb8 sp=0xc000210b20 pc=0x4ebcd9 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr net.(*netFD).Read(0xc00029a000, {0xc0002a8000?, 0x1060100000000?, 0x8?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/net/fd_posix.go:55 +0x29 fp=0xc000210c00 sp=0xc000210bb8 pc=0x5ff569 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr net.(*conn).Read(0xc00029c000, {0xc0002a8000?, 0x18?, 0xc00005a800?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/net/net.go:183 +0x45 fp=0xc000210c48 sp=0xc000210c00 pc=0x6110a5 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr net.(*TCPConn).Read(0x800010601?, {0xc0002a8000?, 0x0?, 0xc000210ca8?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr :1 +0x29 fp=0xc000210c78 sp=0xc000210c48 pc=0x624189 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr bufio.(*Reader).Read(0xc0002a6000, {0xc0002c0040, 0x9, 0x0?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/bufio/bufio.go:237 +0x1bb fp=0xc000210cb0 sp=0xc000210c78 pc=0x575f5b localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr io.ReadAtLeast({0xa47e00, 0xc0002a6000}, {0xc0002c0040, 0x9, 0x9}, 0x9) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/io/io.go:332 +0x9a fp=0xc000210cf8 sp=0xc000210cb0 pc=0x4c9c9a localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr io.ReadFull(...) 
localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/io/io.go:351 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr golang.org/x/net/http2.readFrameHeader({0xc0002c0040?, 0x9?, 0xc000232030?}, {0xa47e00?, 0xc0002a6000?}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/golang.org/x/net@v0.12.0/http2/frame.go:237 +0x6e fp=0xc000210d48 sp=0xc000210cf8 pc=0x74ecee localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr golang.org/x/net/http2.(*Framer).ReadFrame(0xc0002c0000) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/golang.org/x/net@v0.12.0/http2/frame.go:498 +0x95 fp=0xc000210df8 sp=0xc000210d48 pc=0x74f535 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc/internal/transport.(*http2Server).HandleStreams(0xc0002c6000, 0x0?, 0x0?) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/internal/transport/http2_server.go:642 +0x167 fp=0xc000210f10 sp=0xc000210df8 pc=0x77e9a7 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc.(*Server).serveStreams(0xc0001cc1e0, {0xa4e798?, 0xc0002c6000}) localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:969 +0x162 fp=0xc000210f80 sp=0xc000210f10 pc=0x7ef482 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr google.golang.org/grpc.(*Server).handleRawConn.func1() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:912 +0x46 fp=0xc000210fe0 sp=0xc000210f80 pc=0x7eed26 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr runtime.goexit() localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /usr/local/go/src/runtime/asm_amd64.s:1598 +0x1 fp=0xc000210fe8 sp=0xc000210fe0 pc=0x481a41 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr created by google.golang.org/grpc.(*Server).handleRawConn localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr /go/pkg/mod/google.golang.org/grpc@v1.57.0/server.go:911 +0x185 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rax 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rbx 0x7f9401f03700 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rcx 0x7f9429ac7ce1 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rdx 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rdi 0x2 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rsi 0x7f9401f027c0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rbp 0x7f9401f02d80 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m 
GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rsp 0x7f9401f027c0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r8 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r9 0x7f9401f027c0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r10 0x8 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r11 0x246 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r12 0x7f93f45703d0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r13 0x12 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r14 0x7f9401f02ae0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr r15 0x7f93f4000d18 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rip 0x7f9429ac7ce1 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr rflags 0x246 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr cs 0x33 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr fs 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:34935): stderr gs 0x0 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [gptneox] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m [bert-embeddings] Attempting to load localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model bert-embeddings from ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading model in memory from file: /models/ggml-gpt4all-j localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Model bert-embeddings: {backendString:bert-embeddings modelFile:ggml-gpt4all-j threads:4 assetDir:/tmp/localai/backend_data context:0xc00003c098 gRPCOptions:0xc0006141e0 externalBackends:map[huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]} localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/bert-embeddings localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:36323' localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service state dir: /tmp/go-processmanager395863541 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Started localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:36323: connect: connection refused" localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC(ggml-gpt4all-j-127.0.0.1:36323): stderr 2023/08/10 17:34:48 gRPC Server listening at 127.0.0.1:36323 localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC Service Ready localai-api-1 | ^[[90m5:34PM^[[0m ^[[33mDBG^[[0m GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:/models/ggml-gpt4all-j ContextSize:512 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 Main> localai-api-1 | [127.0.0.1]:41312 200 - GET /readyz localai-api-1 | [127.0.0.1]:50028 200 - GET /readyz localai-api-1 | [127.0.0.1]:35378 200 - GET 
/readyz localai-api-1 | [127.0.0.1]:53340 200 - GET /readyz localai-api-1 | [127.0.0.1]:51348 200 - GET /readyz localai-api-1 | [127.0.0.1]:50808 200 - GET /readyz localai-api-1 | ^[[90m5:41PM^[[0m ^[[33mDBG^[[0m Request received: localai-api-1 | ^[[90m5:41PM^[[0m ^[[33mDBG^[[0m Configuration read: &{PredictionOptions:{Model:ggml-gpt4all-j Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:512 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 Neg> localai-api-1 | ^[[90m5:41PM^[[0m ^[[33mDBG^[[0m Parameters: &{PredictionOptions:{Model:ggml-gpt4all-j Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:512 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePro> localai-api-1 | ^[[90m5:41PM^[[0m ^[[33mDBG^[[0m Prompt (before templating): How are you? localai-api-1 | [127.0.0.1]:37216 200 - GET /readyz localai-api-1 | [127.0.0.1]:51460 200 - GET /readyz localai-api-1 | [127.0.0.1]:42756 200 - GET /readyz localai-api-1 | [127.0.0.1]:50734 200 - GET /readyz localai-api-1 | [127.0.0.1]:35250 200 - GET /readyz localai-api-1 | [127.0.0.1]:47248 200 - GET /readyz localai-api-1 | [127.0.0.1]:48750 200 - GET /readyz localai-api-1 | [127.0.0.1]:55402 200 - GET /readyz localai-api-1 | [127.0.0.1]:52008 200 - GET /readyz localai-api-1 | [127.0.0.1]:44972 200 - GET /readyz localai-api-1 | [127.0.0.1]:48204 200 - GET /readyz localai-api-1 | [127.0.0.1]:37994 200 - GET /readyz localai-api-1 | [127.0.0.1]:58512 200 - GET /readyz ```
p-v-a commented 11 months ago

Same error here on dual Xeons; I tried REBUILD=true, but that failed (which may be telling). I'm using an Intel(R) Xeon(R) CPU X5570, which is not modern, but I would still expect it to basically work. I tried many different models, but I always get the same rpc error when calling completion. I guess the RPC error is not specific and just means the internal endpoint didn't start for some other reason. Happy to try debugging steps if someone can guide me.

Chris, can you try following https://github.com/go-skynet/LocalAI/issues/288#issuecomment-1666811444? I think REBUILD=true is failing because it doesn't pass all the required flags.
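For reference, a rough sketch of what that rebuild setup can look like in the `.env` read by docker-compose; the exact flag set depends on your CPU, and the CMAKE_ARGS values below simply mirror the ones suggested later in this thread and by the container's own startup hint:

```env
# Sketch only: force a rebuild inside the container and disable the CPU
# instruction sets that older Xeons (e.g. X5570) do not support.
REBUILD=true
THREADS=4
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF"
```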

artshade commented 11 months ago

Can you try following #288 (comment)? I think REBUILD=true is failing because it doesn't pass all the required flags.

Thank you! I tried building with these variables and running, and got a different response, but it still crashes. Now the cause is an assert: GGML_ASSERT: /build/go-ggml-transformers/ggml.cpp/src/ggml.c:4128: wtype != GGML_TYPE_COUNT.

Docker container logs: https://paste.gg/p/anonymous/7a01dde6725748cd8a85beea77fab165


However, the logs show localai-api-1 | I CMAKE_ARGS: (i.e. the value appears empty). The file that declares the value read by make is ./go-ggllm/Makefile.

This suggests the variable value may not be passed through to the required environment. The variable was set in .env, and the default Docker Compose file was used.
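One way to check whether the value actually reaches the build environment (a sketch, assuming the stock docker-compose setup used in this thread with a service named `api`; use `docker-compose` instead of `docker compose` on older installs):

```sh
# Render the effective compose configuration and look for the variable
docker compose config | grep -i cmake_args

# Or inspect the environment inside the running container
docker compose exec api env | grep CMAKE_ARGS
```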

p-v-a commented 11 months ago

The llama.cpp backend definitely looks healthy now; it no longer fails on load. To me this looks like a corrupted model file.

localai-api-1 | 3:29PM DBG GRPC(ggml-gpt4all-j-127.0.0.1:38783): stderr create_gpt_params: loading model /models/ggml-gpt4all-j
localai-api-1 | 3:29PM DBG GRPC(ggml-gpt4all-j-127.0.0.1:38783): stderr llama.cpp: loading model from /models/ggml-gpt4all-j
localai-api-1 | 3:30PM DBG GRPC(ggml-gpt4all-j-127.0.0.1:38783): stderr error loading model: unexpectedly reached end of file
localai-api-1 | 3:30PM DBG GRPC(ggml-gpt4all-j-127.0.0.1:38783): stderr llama_load_model_from_file: failed to load model
localai-api-1 | 3:30PM DBG GRPC(ggml-gpt4all-j-127.0.0.1:38783): stderr llama_init_from_gpt_params: error: failed to load model '/models/ggml-gpt4all-j'

The error you are referring to comes from another backend, not from llama.cpp, because local-ai tries to load the model with every available backend when the configured backend fails.

I didn't use REBUILD=true though; I just modified the Dockerfile directly to pass the correct parameters and built a new container. This way is easier for me, since I only have to wait for the build once rather than on every start of the container. Here is the patch file:

diff --git a/Dockerfile b/Dockerfile
index f67a1f3..0650ff8 100644
--- a/Dockerfile
+++ b/Dockerfile
@@ -13,6 +13,7 @@ ARG TARGETVARIANT
 ENV BUILD_TYPE=${BUILD_TYPE}
 ENV EXTERNAL_GRPC_BACKENDS="huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py"
 ARG GO_TAGS="stablediffusion tts"
+ARG BUILD_PARALLELISM

 RUN apt-get update && \
     apt-get install -y ca-certificates cmake curl patch pip
@@ -80,6 +81,8 @@ ENV GO_TAGS=${GO_TAGS}
 ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
 ENV NVIDIA_REQUIRE_CUDA="cuda>=${CUDA_MAJOR_VERSION}.0"
 ENV NVIDIA_VISIBLE_DEVICES=all
+ENV CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF"
+ENV BUILD_PARALLELISM=${BUILD_PARALLELISM}

 WORKDIR /build

@@ -90,7 +93,7 @@ RUN make prepare
 COPY . .
 COPY .git .

-RUN ESPEAK_DATA=/build/lib/Linux-$(uname -m)/piper_phonemize/lib/espeak-ng-data make build
+RUN ESPEAK_DATA=/build/lib/Linux-$(uname -m)/piper_phonemize/lib/espeak-ng-data make build -j${BUILD_PARALLELISM:-1}
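With that patch applied, a sketch of the one-off build and run (the image tag and parallelism value are arbitrary; docker-compose is assumed to point at the locally built `localai` image, as in the compose file quoted later in this thread):

```sh
# Build once, passing the build argument introduced by the patch
docker build --build-arg BUILD_PARALLELISM=8 -t localai .

# Then start the stack without REBUILD=true
docker compose up -d
```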
artshade commented 11 months ago

+ENV CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF"

Thank you! It seems -DLLAMA_F16C is defined twice. Also, may I ask you to share the exact steps you followed to get it working?

artshade commented 11 months ago

The llama.cpp backend definitely looks healthy now... I didn't use REBUILD=true though; I just modified the Dockerfile directly to...

I removed everything from ./models/, removed the Docker container and image, and tried building the image with a modified Dockerfile, yet the model still fails to load after the two slightly modified requests below, with the error: 'Model format not supported (no matching implementation found)'.

What might it be? I would appreciate a suggestion.


"Dockerfile" ``` ARG GO_VERSION=1.20-bullseye FROM golang:$GO_VERSION as requirements ARG BUILD_TYPE ARG CUDA_MAJOR_VERSION=11 ARG CUDA_MINOR_VERSION=7 ARG SPDLOG_VERSION="1.11.0" ARG PIPER_PHONEMIZE_VERSION='1.0.0' ARG TARGETARCH ARG TARGETVARIANT ENV BUILD_TYPE=${BUILD_TYPE} ENV EXTERNAL_GRPC_BACKENDS="huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py,autogptq:/build/extra/grpc/autogptq/autogptq.py,bark:/build/extra/grpc/bark/ttsbark.py,diffusers:/build/extra/grpc/diffusers/backend_diffusers.py" ARG GO_TAGS="stablediffusion tts" ARG BUILD_PARALLELISM RUN apt-get update && \ apt-get install -y ca-certificates cmake curl patch pip # Use the variables in subsequent instructions RUN echo "Target Architecture: $TARGETARCH" RUN echo "Target Variant: $TARGETVARIANT" # CuBLAS requirements RUN if [ "${BUILD_TYPE}" = "cublas" ]; then \ apt-get install -y software-properties-common && \ apt-add-repository contrib && \ curl -O https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/cuda-keyring_1.0-1_all.deb && \ dpkg -i cuda-keyring_1.0-1_all.deb && \ rm -f cuda-keyring_1.0-1_all.deb && \ apt-get update && \ apt-get install -y cuda-nvcc-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} libcublas-dev-${CUDA_MAJOR_VERSION}-${CUDA_MINOR_VERSION} \ ; fi ENV PATH /usr/local/cuda/bin:${PATH} # Extras requirements COPY extra/requirements.txt /build/extra/requirements.txt ENV PATH="/root/.cargo/bin:${PATH}" RUN pip install --upgrade pip RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y RUN if [ "${TARGETARCH}" = "amd64" ]; then \ pip install git+https://github.com/suno-ai/bark.git diffusers invisible_watermark transformers accelerate safetensors;\ fi RUN if [ "${BUILD_TYPE}" = "cublas" ] && [ "${TARGETARCH}" = "amd64" ]; then \ pip install torch && pip install auto-gptq;\ fi RUN pip install -r /build/extra/requirements.txt && rm -rf /build/extra/requirements.txt WORKDIR /build # OpenBLAS requirements RUN apt-get install -y libopenblas-dev # Stable Diffusion requirements RUN apt-get install -y libopencv-dev && \ ln -s /usr/include/opencv4/opencv2 /usr/include/opencv2 # piper requirements # Use pre-compiled Piper phonemization library (includes onnxruntime) #RUN if echo "${GO_TAGS}" | grep -q "tts"; then \ RUN test -n "$TARGETARCH" \ || (echo 'warn: missing $TARGETARCH, either set this `ARG` manually, or run using `docker buildkit`') RUN curl -L "https://github.com/gabime/spdlog/archive/refs/tags/v${SPDLOG_VERSION}.tar.gz" | \ tar -xzvf - && \ mkdir -p "spdlog-${SPDLOG_VERSION}/build" && \ cd "spdlog-${SPDLOG_VERSION}/build" && \ cmake .. && \ make -j8 && \ cmake --install . --prefix /usr && mkdir -p "lib/Linux-$(uname -m)" && \ cd /build && \ mkdir -p "lib/Linux-$(uname -m)/piper_phonemize" && \ curl -L "https://github.com/rhasspy/piper-phonemize/releases/download/v${PIPER_PHONEMIZE_VERSION}/libpiper_phonemize-${TARGETARCH:-$(go env GOARCH)}${TARGETVARIANT}.tar.gz" | \ tar -C "lib/Linux-$(uname -m)/piper_phonemize" -xzvf - && ls -liah /build/lib/Linux-$(uname -m)/piper_phonemize/ && \ cp -rfv /build/lib/Linux-$(uname -m)/piper_phonemize/lib/. /usr/lib/ && \ ln -s /usr/lib/libpiper_phonemize.so /usr/lib/libpiper_phonemize.so.1 && \ cp -rfv /build/lib/Linux-$(uname -m)/piper_phonemize/include/. 
/usr/include/ # \ # ; fi ################################### ################################### FROM requirements as builder ARG GO_TAGS="stablediffusion tts" ENV GO_TAGS=${GO_TAGS} ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility ENV NVIDIA_REQUIRE_CUDA="cuda>=${CUDA_MAJOR_VERSION}.0" ENV NVIDIA_VISIBLE_DEVICES=all ENV CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_AVX=OFF -DLLAMA_FMA=OFF -DLLAMA_F16C=OFF" ENV BUILD_PARALLELISM=${BUILD_PARALLELISM} WORKDIR /build COPY Makefile . RUN make get-sources COPY go.mod . RUN make prepare COPY . . COPY .git . RUN ESPEAK_DATA=/build/lib/Linux-$(uname -m)/piper_phonemize/lib/espeak-ng-data make build -j${BUILD_PARALLELISM:-1} ################################### ################################### FROM requirements ARG FFMPEG ENV REBUILD=false ENV HEALTHCHECK_ENDPOINT=http://localhost:8080/readyz # Add FFmpeg RUN if [ "${FFMPEG}" = "true" ]; then \ apt-get install -y ffmpeg \ ; fi WORKDIR /build # we start fresh & re-copy all assets because `make build` does not clean up nicely after itself # so when `entrypoint.sh` runs `make build` again (which it does by default), the build would fail # see https://github.com/go-skynet/LocalAI/pull/658#discussion_r1241971626 and # https://github.com/go-skynet/LocalAI/pull/434 COPY . . RUN make prepare-sources COPY --from=builder /build/local-ai ./ # Define the health check command HEALTHCHECK --interval=1m --timeout=10m --retries=10 \ CMD curl -f $HEALTHCHECK_ENDPOINT || exit 1 EXPOSE 8080 ENTRYPOINT [ "/build/entrypoint.sh" ] ```
"docker-compose.yaml" ```yaml version: '3.6' services: api: image: 'localai' ports: - 3060:8080 env_file: - .env volumes: - ./models:/models:cached command: ["/usr/bin/local-ai" ] ```
".env" ```env MODELS_PATH='/models' DEBUG='true' ```

HTTP requests

`1/2.` Install model:

```js
POST, '/models/apply',
{
  "url": "github:go-skynet/model-gallery/gpt4all-j.yaml"
}
```

`2/2.` Test:

```js
POST, '/chat/completions',
{
  "model": "gpt4all-j",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.1
}
```
Docker container log ``` localai-api-1 | @@@@@ localai-api-1 | Skipping rebuild localai-api-1 | @@@@@ localai-api-1 | If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true localai-api-1 | If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed: localai-api-1 | CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF" localai-api-1 | see the documentation at: https://localai.io/basics/build/index.html localai-api-1 | Note: See also https://github.com/go-skynet/LocalAI/issues/288 localai-api-1 | @@@@@ localai-api-1 | CPU info: localai-api-1 | model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz localai-api-1 | flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt lahf_lm epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid dtherm ida arat flush_l1d localai-api-1 | CPU: no AVX found localai-api-1 | CPU: no AVX2 found localai-api-1 | CPU: no AVX512 found localai-api-1 | @@@@@ localai-api-1 | 10:51PM DBG no galleries to load localai-api-1 | 10:51PM INF Starting LocalAI using 4 threads, with models path: /models localai-api-1 | 10:51PM INF LocalAI version: v1.23.2-23-g1d1cae8 (1d1cae8e4dd33f107693e1b131bd0dcf37daccf9) localai-api-1 | 10:51PM DBG Extracting backend assets files to /tmp/localai/backend_data localai-api-1 | localai-api-1 | ┌───────────────────────────────────────────────────┐ localai-api-1 | │ Fiber v2.48.0 │ localai-api-1 | │ http://127.0.0.1:8080 │ localai-api-1 | │ (bound on host 0.0.0.0 and port 8080) │ localai-api-1 | │ │ localai-api-1 | │ Handlers ............ 56 Processes ........... 1 │ localai-api-1 | │ Prefork ....... Disabled PID ................ 
14 │ localai-api-1 | └───────────────────────────────────────────────────┘ localai-api-1 | localai-api-1 | [127.0.0.1]:41822 200 - GET /readyz localai-api-1 | 10:52PM DBG Listing models from galleries: [] localai-api-1 | 10:52PM DBG Models found from galleries: [] localai-api-1 | [127.0.0.1]:47544 200 - GET /readyz localai-api-1 | [127.0.0.1]:59412 200 - GET /readyz localai-api-1 | [127.0.0.1]:42536 200 - GET /readyz localai-api-1 | [127.0.0.1]:60624 200 - GET /readyz localai-api-1 | [127.0.0.1]:53094 200 - GET /readyz localai-api-1 | [127.0.0.1]:51508 200 - GET /readyz localai-api-1 | [172.20.0.1]:44428 200 - POST /models/apply localai-api-1 | 10:58PM DBG Checking "ggml-gpt4all-j.bin" exists and matches SHA localai-api-1 | 10:58PM DBG Downloading "https://gpt4all.io/models/ggml-gpt4all-j.bin" localai-api-1 | 10:58PM DBG Downloading ggml-gpt4all-j.bin: 83.1 MiB/3.5 GiB (2.30%) ETA: 3m32.337525087s localai-api-1 | 10:58PM DBG Downloading ggml-gpt4all-j.bin: 244.0 MiB/3.5 GiB (6.76%) ETA: 2m18.414227552s localai-api-1 | 10:58PM DBG Downloading ggml-gpt4all-j.bin: 470.4 MiB/3.5 GiB (13.03%) ETA: 1m40.344370313s localai-api-1 | 10:58PM DBG Downloading ggml-gpt4all-j.bin: 710.7 MiB/3.5 GiB (19.69%) ETA: 1m21.73540622s localai-api-1 | 10:58PM DBG Downloading ggml-gpt4all-j.bin: 923.8 MiB/3.5 GiB (25.59%) ETA: 1m12.792317103s localai-api-1 | 10:58PM DBG Downloading ggml-gpt4all-j.bin: 1.1 GiB/3.5 GiB (31.25%) ETA: 1m6.080551458s localai-api-1 | 10:59PM DBG Downloading ggml-gpt4all-j.bin: 1.3 GiB/3.5 GiB (37.65%) ETA: 58.012281125s localai-api-1 | [127.0.0.1]:57348 200 - GET /readyz localai-api-1 | 10:59PM DBG Downloading ggml-gpt4all-j.bin: 1.4 GiB/3.5 GiB (39.93%) ETA: 1m16.58281766s localai-api-1 | 10:59PM DBG Downloading ggml-gpt4all-j.bin: 1.4 GiB/3.5 GiB (39.93%) ETA: 1m34.913637689s localai-api-1 | 10:59PM DBG Downloading ggml-gpt4all-j.bin: 1.5 GiB/3.5 GiB (43.06%) ETA: 1m51.585089357s localai-api-1 | 10:59PM DBG Downloading ggml-gpt4all-j.bin: 1.6 GiB/3.5 GiB (44.93%) ETA: 1m49.592852707s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 1.7 GiB/3.5 GiB (47.67%) ETA: 1m43.667834597s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 1.8 GiB/3.5 GiB (50.06%) ETA: 1m39.407979443s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 1.9 GiB/3.5 GiB (53.01%) ETA: 1m32.784750675s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 1.9 GiB/3.5 GiB (55.31%) ETA: 1m28.602374947s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 2.1 GiB/3.5 GiB (60.32%) ETA: 1m15.435499783s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 2.3 GiB/3.5 GiB (65.80%) ETA: 1m2.213429162s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 2.5 GiB/3.5 GiB (72.24%) ETA: 47.919198644s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 2.8 GiB/3.5 GiB (78.09%) ETA: 36.392779124s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 2.9 GiB/3.5 GiB (82.47%) ETA: 28.633027851s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 3.0 GiB/3.5 GiB (85.40%) ETA: 23.88168525s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 3.2 GiB/3.5 GiB (91.57%) ETA: 13.323967366s localai-api-1 | 11:00PM DBG Downloading ggml-gpt4all-j.bin: 3.4 GiB/3.5 GiB (97.03%) ETA: 4.580631011s localai-api-1 | 11:01PM DBG File "ggml-gpt4all-j.bin" downloaded and verified localai-api-1 | 11:01PM DBG Prompt template "gpt4all-completion" written localai-api-1 | 11:01PM DBG Prompt template "gpt4all-chat" written localai-api-1 | 11:01PM DBG Written config file 
/models/gpt4all-j.yaml localai-api-1 | [127.0.0.1]:57882 200 - GET /readyz localai-api-1 | [127.0.0.1]:58374 200 - GET /readyz localai-api-1 | [127.0.0.1]:33906 200 - GET /readyz localai-api-1 | [127.0.0.1]:42936 200 - GET /readyz localai-api-1 | 11:05PM DBG Request received: localai-api-1 | 11:05PM DBG Configuration read: &{PredictionOptions:{Model:ggml-gpt4all-j.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.1 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false} Name:gpt4all-j F16:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend:gpt4all-j TemplateConfig:{Chat:gpt4all-chat ChatMessage: Completion:gpt4all-completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false} Step:0} localai-api-1 | 11:05PM DBG Parameters: &{PredictionOptions:{Model:ggml-gpt4all-j.bin Language: N:0 TopP:0.7 TopK:80 Temperature:0.1 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false} Name:gpt4all-j F16:false Threads:4 Debug:true Roles:map[] Embeddings:false Backend:gpt4all-j TemplateConfig:{Chat:gpt4all-chat ChatMessage: Completion:gpt4all-completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:1024 NUMA:false} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false} Step:0} localai-api-1 | 11:05PM DBG Prompt (before templating): How are you? localai-api-1 | 11:05PM DBG Template found, input modified to: The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response. localai-api-1 | ### Prompt: localai-api-1 | How are you? localai-api-1 | ### Response: localai-api-1 | 11:05PM DBG Prompt (after templating): The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response. localai-api-1 | ### Prompt: localai-api-1 | How are you? 
localai-api-1 | ### Response: localai-api-1 | 11:05PM DBG Loading model gpt4all-j from ggml-gpt4all-j.bin localai-api-1 | 11:05PM DBG Loading model in memory from file: /models/ggml-gpt4all-j.bin localai-api-1 | 11:05PM DBG Loading GRPC Model gpt4all: {backendString:gpt4all-j model:ggml-gpt4all-j.bin threads:4 assetDir:/tmp/localai/backend_data context:0xc000190000 gRPCOptions:0xc00031c000 externalBackends:map[autogptq:/build/extra/grpc/autogptq/autogptq.py bark:/build/extra/grpc/bark/ttsbark.py diffusers:/build/extra/grpc/diffusers/backend_diffusers.py huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py]} localai-api-1 | 11:05PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/gpt4all localai-api-1 | 11:05PM DBG GRPC Service for ggml-gpt4all-j.bin will be running at: '127.0.0.1:33991' localai-api-1 | 11:05PM DBG GRPC Service state dir: /tmp/go-processmanager3508976552 localai-api-1 | 11:05PM DBG GRPC Service Started localai-api-1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:33991: connect: connection refused" localai-api-1 | 11:05PM DBG GRPC(ggml-gpt4all-j.bin-127.0.0.1:33991): stderr 2023/08/11 23:05:05 gRPC Server listening at 127.0.0.1:33991 localai-api-1 | 11:05PM DBG GRPC Service Ready localai-api-1 | 11:05PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:ggml-gpt4all-j.bin ContextSize:1024 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:4 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/gpt4all RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/ggml-gpt4all-j.bin Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false} localai-api-1 | 11:05PM DBG GRPC(ggml-gpt4all-j.bin-127.0.0.1:33991): stderr load_model: error 'Model format not supported (no matching implementation found)' localai-api-1 | [172.20.0.1]:44428 500 - POST /chat/completions ```
p-v-a commented 11 months ago

What I should point out here is that the CMake options above target llama.cpp, and that backend works fine. It is possible the options have no effect on the gpt backend and you end up in the same situation, or it could be a completely different issue with the gpt backend.

I wonder if you are able to change the backend to llama?

I'm running this in a Proxmox VM that runs a k8s worker, inside a pod that runs local-ai. Here is an example of my working model config file; I'm using a LLaMA 2 7B model, but llama.cpp supports GPT models:

name: gpt-3.5-turbo
backend: llama
parameters:
  model: llama-2-7b-chat.ggmlv3.q4_0.bin
  top_k: 40
  temperature: 0.3
  top_p: 1
context_size: 4096
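With a config like this dropped into the models directory, requests target the config's `name` rather than the model file name; a sketch of the call, using the same host and port as the other examples in this thread:

```sh
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "gpt-3.5-turbo",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.3
}'
```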
pasqualesimonetti commented 11 months ago

🟡 Partially solved by downloading the model again.

gpt-3.5-turbo.yaml file in models directory
``` # Model name. # The model name is used to identify the model in the API calls. name: gpt-3.5-turbo # Default model parameters. # These options can also be specified in the API calls parameters: # Relative to the models path model: gpt-3.5-turbo # temperature temperature: 0.3 # all the OpenAI request options here.. top_k: top_p: max_tokens: batch: f16: true ignore_eos: true n_keep: 10 seed: mode: step: negative_prompt: typical_p: tfz: frequency_penalty: mirostat_eta: mirostat_tau: mirostat: rope_freq_base: rope_freq_scale: negative_prompt_scale: # Default context size context_size: 512 # Default number of threads threads: 10 # Define a backend (optional). By default it will try to guess the backend the first time the model is interacted with. backend: gptj # available: llama, stablelm, gpt2, gptj rwkv # stopwords (if supported by the backend) stopwords: - "HUMAN:" - "### Response:" # string to trim space to trimspace: - string # Strings to cut from the response cutstrings: - "string" # Directory used to store additional assets asset_dir: "" # define chat roles roles: user: "HUMAN:" system: "GPT:" assistant: "ASSISTANT:" template: # template file ".tmpl" with the prompt template to use by default on the endpoint call. Note there is no extension in the files completion: completion chat: gpt-3.5-turbo edit: edit_template function: function_template function: disable_no_action: true no_action_function_name: "reply" no_action_description_name: "Reply to the AI assistant" system_prompt: rms_norm_eps: # Set it to 8 for llama2 70b ngqa: 1 ## LLAMA specific options # Enable F16 if backend supports it f16: true # Enable debugging debug: true # Enable embeddings embeddings: true # Mirostat configuration (llama.cpp only) mirostat_eta: 0.8 mirostat_tau: 0.9 mirostat: 1 # GPU Layers (only used when built with cublas) gpu_layers: 22 # Enable memory lock mmlock: true # GPU setting to split the tensor in multiple parts and define a main GPU # see llama.cpp for usage tensor_split: "" main_gpu: "" # Define a prompt cache path (relative to the models) prompt_cache_path: "prompt-cache" # Cache all the prompts prompt_cache_all: true # Read only prompt_cache_ro: false # Enable mmap mmap: true # Enable low vram mode (GPU only) low_vram: true # Set NUMA mode (CPU only) numa: true ```
Running locally on an Intel MacBook Pro 2018
```
# Run LocalAI
./local-ai --models-path ./models/ --debug
```
API call 1st
```
# API call 1st
curl http://localhost:8080/v1/models

# API call 1st response
Error
null%
```
```
# API call 1st server log
OK
[127.0.0.1]:50939 200 - GET /v1/models
```
API call 2nd
``` # API call 2nd curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{ "model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "How are you?"}], "temperature": 0.9 }' # API call 2nd response Error {"error":{"code":500,"message":"could not load model: rpc error: code = Unknown desc = failed loading model","type":""}}% ``` ``` # API call 2nd response server log Error 8:03PM DBG Request received: 8:03PM DBG Configuration read: &{PredictionOptions:{Model:gpt-3.5-turbo Language: N:0 TopP:0 TopK:0 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:true IgnoreEOS:true RepeatPenalty:0 Keep:10 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false} Name:gpt-3.5-turbo F16:true Threads:10 Debug:true Roles:map[assistant:ASSISTANT: system:GPT: user:HUMAN:] Embeddings:true Backend:gptj TemplateConfig:{Chat:gpt-3.5-turbo ChatMessage: Completion:completion Edit:edit_template Functions:function_template} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:true NoActionFunctionName:reply NoActionDescriptionName:Reply to the AI assistant} LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:1 PromptCachePath:prompt-cache PromptCacheAll:true PromptCacheRO:false MirostatETA:0.8 MirostatTAU:0.9 Mirostat:1 NGPULayers:22 MMap:true MMlock:true LowVRAM:true Grammar: StopWords:[HUMAN: ### Response:] Cutstrings:[string] TrimSpace:[string] ContextSize:512 NUMA:true} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters:} Step:0} 8:03PM DBG Parameters: &{PredictionOptions:{Model:gpt-3.5-turbo Language: N:0 TopP:0 TopK:0 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:true IgnoreEOS:true RepeatPenalty:0 Keep:10 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false} Name:gpt-3.5-turbo F16:true Threads:10 Debug:true Roles:map[assistant:ASSISTANT: system:GPT: user:HUMAN:] Embeddings:true Backend:gptj TemplateConfig:{Chat:gpt-3.5-turbo ChatMessage: Completion:completion Edit:edit_template Functions:function_template} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:true NoActionFunctionName:reply NoActionDescriptionName:Reply to the AI assistant} LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:1 PromptCachePath:prompt-cache PromptCacheAll:true PromptCacheRO:false MirostatETA:0.8 MirostatTAU:0.9 Mirostat:1 NGPULayers:22 MMap:true MMlock:true LowVRAM:true Grammar: StopWords:[HUMAN: ### Response:] Cutstrings:[string] TrimSpace:[string] ContextSize:512 NUMA:true} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters:} Step:0} 8:03PM DBG Prompt (before templating): HUMAN: How are you? 8:03PM DBG Template failed loading: failed loading a template for gpt-3.5-turbo 8:03PM DBG Prompt (after templating): HUMAN: How are you? 
8:03PM DBG Loading model gptj from gpt-3.5-turbo 8:03PM DBG Loading model in memory from file: models/gpt-3.5-turbo 8:03PM DBG Loading GRPC Model gptj: {backendString:gptj model:gpt-3.5-turbo threads:10 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00020a100 externalBackends:map[]} 8:03PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/gptj 8:03PM DBG GRPC Service for gpt-3.5-turbo will be running at: '127.0.0.1:54163' 8:03PM DBG GRPC Service state dir: /var/folders/y6/35qd4vcs3wd2prk4xbpmk3380000gp/T/go-processmanager305159478 8:03PM DBG GRPC Service Started rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:54163: connect: connection refused" 8:03PM DBG GRPC(gpt-3.5-turbo-127.0.0.1:54163): stderr 2023/08/15 20:03:18 gRPC Server listening at 127.0.0.1:54163 8:03PM DBG GRPC Service Ready 8:03PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:} sizeCache:0 unknownFields:[] Model:gpt-3.5-turbo ContextSize:512 Seed:0 NBatch:512 F16Memory:true MLock:true MMap:true VocabOnly:false LowVRAM:true Embeddings:true NUMA:true NGPULayers:22 MainGPU: TensorSplit: Threads:10 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:1 ModelFile:models/gpt-3.5-turbo Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false} 8:03PM DBG GRPC(gpt-3.5-turbo-127.0.0.1:54163): stderr gptj_model_load: invalid model file 'models/gpt-3.5-turbo' (bad magic) 8:03PM DBG GRPC(gpt-3.5-turbo-127.0.0.1:54163): stderr gptj_bootstrap: failed to load model from 'models/gpt-3.5-turbo' [127.0.0.1]:54162 500 - POST /v1/chat/completions ```
artshade commented 11 months ago

I'm running this in proxmox VM, that runs k8s worker, inside pod that runs local-ai. here is example of my working model config file, though I'm using LLAMA2 7b model, but llama.cpp supports gpt models:

Thank you very much for sharing. I am still trying to grasp how to run all of this exactly. Currently, even when I get it running without a crash like the one above, it runs the whole day (until manually stopped) at ~4x100% load on a relatively old Xeon E5620 CPU (8 physical cores in total), with nothing in the result. It's as if it gets into an infinite loop somewhere, and the same happens with both the "gpt4all-j" and "ggml-gpt4all-j.bin" attempts:

(screenshot: 2023-08-16_19-41-26)
artshade commented 11 months ago

@pasqualesimonetti , may I ask how you got it all running from the very beginning? For example, did you compile local-ai on your machine in the first place (i.e. a modified Dockerfile, REBUILD='true', etc.)? If so, did you modify any environment variables or other Docker configuration/dependencies? What does your docker-compose.yaml file look like, and how did you install items into ./models/?

pasqualesimonetti commented 11 months ago

@pasqualesimonetti , may I ask how you got it all running from the very beginning? For example, did you compile local-ai on your machine in the first place (i.e. a modified Dockerfile, REBUILD='true', etc.)? If so, did you modify any environment variables or other Docker configuration/dependencies? What does your docker-compose.yaml file look like, and how did you install items into ./models/?

I run it locally and not on Docker. I just downloaded the model again.

# Download gpt4all-j to models/
wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

You probably have to download the Docker image again.

Anyway, even though I am able to make simple curl requests (such as curl http://localhost:8080/models/available), I still get RPC errors when trying to run https://github.com/aorumbayev/autogpt4all or https://github.com/go-skynet/LocalAI/tree/master/examples/autoGPT.

oldunclez commented 11 months ago

I got the same error. I have tried the following two methods, but neither works: https://github.com/go-skynet/LocalAI/issues/771#issuecomment-1675026506 and https://github.com/go-skynet/LocalAI/issues/771#issuecomment-1646789212

failable commented 10 months ago

Same error. That's a bad experience for a simple quick-start example.

tdzz1102 commented 10 months ago

Maybe this is caused by running out of memory. I tried Docker with a 2 GB memory limit while monitoring docker stats. When memory usage reached 2 GB, it soon dropped back down and produced this error message.
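If you want to reproduce that check, a sketch of capping the container's memory and watching usage while a request is in flight (the image tag, port, and mount path are assumptions based on the compose files in this thread):

```sh
# Run LocalAI with a ~2 GiB memory cap
docker run --rm -m 2g -p 8080:8080 -v $PWD/models:/models quay.io/go-skynet/local-ai:latest

# In another terminal, watch memory usage while sending a completion request
docker stats
```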

pasqualesimonetti commented 10 months ago

Maybe this is caused by running out of memory. I tried Docker with a 2 GB memory limit while monitoring docker stats. When memory usage reached 2 GB, it soon dropped back down and produced this error message.

Yes, I rebooted my 2018 MacBook Pro with an Intel i7 and freed up memory, and then LocalAI loaded the model into memory and it worked, even though performance on my laptop was terribly slow.

Anyway, I still have to test it more thoroughly, so I cannot provide a definitive solution yet.

mokkin commented 10 months ago

I'm getting the same error on Ubuntu 20.04 with 16 GB RAM available: api_1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:40173: connect: connection refused"

starchou6 commented 10 months ago


the problem was solved, the error is normal, the word "err" was misunderstood. Maybe the log should be clearer: when the model is not configured correctly, it generates many log lines like "rpc error: code = Unavailable desc = connection error: desc = "

Hi bro, how did you solve this problem?

maietta commented 9 months ago

BUMP!

I wish there was a "LocalAI for Dummies" book, because I need it right now.

I'm new to the world of AI and really want to get this working. I have great hardware, so that shouldn't be an issue.

maietta commented 9 months ago

I'm getting the same on Ubuntu 20.04 with 16 GB RAM available: api_1 | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:40173: connect: connection refused"

Same with Ubuntu 22.04 LTS

Mist-Hunter commented 9 months ago

Related? https://github.com/go-skynet/LocalAI/issues/1109

I can get models to work (very slowly on my dual Xeons), but so far I can't get a blip out of my GPU while watching nvtop. At the same time, on the same Docker instance, I am able to run kestr3l/stable-diffusion-webui, and nvidia-smi works in the host VM and in the containers.

I'm getting the same issue with:

          {
            "url": "github:go-skynet/model-gallery/llama2-7b-chat-gguf.yaml",
            "name": "llama2-7b-chat-gguf",
            "overrides": {
              "f16": "true",
              "gpu_layers": "30"
            }
          }
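For context, this is how such a gallery entry is typically applied, matching the /models/apply calls seen earlier in the thread (host and port assumed):

```sh
curl http://localhost:8080/models/apply -H "Content-Type: application/json" -d '{
  "url": "github:go-skynet/model-gallery/llama2-7b-chat-gguf.yaml",
  "name": "llama2-7b-chat-gguf",
  "overrides": { "f16": "true", "gpu_layers": "30" }
}'
```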

It's probably user error on my part, but I've been trying for a couple of days to find a working web frontend for LocalAI (ghcr.io/mckaywrigley/chatbot-ui can't select models and only sees GPT 3.5 (CPU only), and quay.io/go-skynet/localai-frontend is choking on CORS errors and doesn't seem to work at all). I got LibreChat to work (https://github.com/danny-avila/LibreChat/issues/1027), but I can't get any GPU-backed models running.

Thanks for the great container, I really love the idea. Any help is appreciated :)

pasqualesimonetti commented 9 months ago

To those who have "out of memory" issues, I suggest these Huggingface Spaces https://huggingface.co/spaces/hf-accelerate/model-memory-usage, https://huggingface.co/spaces/Vokturz/can-it-run-llm to understand which model can be used on the hardware.
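As a rough rule of thumb (my own back-of-the-envelope numbers, not taken from those Spaces): the weights take about params × bits-per-weight / 8 bytes, plus the KV cache for whatever context size you configure.

7B at Q4_K_M (~4.5 bits/weight):   7e9 * 4.5 / 8                      ≈ 3.9 GB weights
KV cache (32 layers, 4096 dim, f16, 4096 ctx):  2 * 32 * 4096 * 2 * 4096 ≈ 2.0 GB
total                                                                  ≈ 6 GB plus runtime overhead

So a 7B Q4_K_M model with a 4096 context should fit on most of the hardware mentioned in this thread.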

TheRealAlexV commented 7 months ago

Also getting a similar issue here.

.env

THREADS=8
CONTEXT_SIZE=4096
GALLERIES=[{"name":"model-gallery", "url":"github:go-skynet/model-gallery/index.yaml"}, {"url": "github:go-skynet/model-gallery/huggingface.yaml","name":"huggingface"}]
MODELS_PATH=/models
DEBUG=true
COMPEL=0
SINGLE_ACTIVE_BACKEND=true
BUILD_TYPE=cublas
REBUILD=true
GO_TAGS=stablediffusion
IMAGE_PATH=/tmp

docker-compose.yaml

version: '3.6'
services:
  api:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    image: quay.io/go-skynet/local-ai:sha-238fec2-cublas-cuda12-ffmpeg-core
    tty: true # enable colorized logs
    restart: always # should this be on-failure ?
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models
      - ./images/:/tmp/generated/images/
    command: ["/usr/bin/local-ai" ]

Request & Error

…/AI/LocalAI שׂ master via 🐹 on ☁️ (us-east-1) 
🕙 19:58:46 ❯❯ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
                    "model": "llama2-7b-chat-gguf",
                    "messages": [{"role": "user", "content": "How are you?"}],
                    "temperature": 0.9 
                  }'
{"error":{"code":500,"message":"could not load model: rpc error: code = Unknown desc = failed loading model","type":""}}⏎                                          

Container Logs

2023-12-03 19:58:44 12:58AM ERR error processing message {SystemPrompt:You are a helpful assistant, below is a conversation, please respond with the next message and do not ask follow-up questions Role:User: RoleName:user Content:How are you? MessageIndex:0} using template "llama2-7b-chat-gguf-chat": template: prompt:3:5: executing "prompt" at <.Input>: can't evaluate field Input in type model.ChatMessageTemplateData. Skipping!
2023-12-03 19:58:44 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:46545: connect: connection refused"

Debug

2023-12-03 20:06:12 [127.0.0.1]:39930 200 - GET /readyz
2023-12-03 20:06:52 1:06AM DBG Request received: 
2023-12-03 20:06:52 1:06AM DBG Configuration read: &{PredictionOptions:{Model: Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:llama2-7b-chat-gguf F16:false Threads:8 Debug:true Roles:map[assistant:Assitant: assistant_function_call:Function Call: function:Function Result: system:System: user:User:] Embeddings:false Backend:llama TemplateConfig:{Chat: ChatMessage:llama2-7b-chat-gguf-chat Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt:You are a helpful assistant, below is a conversation, please respond with the next message and do not ask follow-up questions TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:4096 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
2023-12-03 20:06:52 1:06AM DBG Parameters: &{PredictionOptions:{Model: Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:llama2-7b-chat-gguf F16:false Threads:8 Debug:true Roles:map[assistant:Assitant: assistant_function_call:Function Call: function:Function Result: system:System: user:User:] Embeddings:false Backend:llama TemplateConfig:{Chat: ChatMessage:llama2-7b-chat-gguf-chat Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt:You are a helpful assistant, below is a conversation, please respond with the next message and do not ask follow-up questions TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:4096 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
2023-12-03 20:06:52 1:06AM ERR error processing message {SystemPrompt:You are a helpful assistant, below is a conversation, please respond with the next message and do not ask follow-up questions Role:User: RoleName:user Content:How are you? MessageIndex:0} using template "llama2-7b-chat-gguf-chat": template: prompt:3:5: executing "prompt" at <.Input>: can't evaluate field Input in type model.ChatMessageTemplateData. Skipping!
2023-12-03 20:06:52 1:06AM DBG Prompt (before templating): User:How are you?
2023-12-03 20:06:52 1:06AM DBG Template failed loading: failed loading a template for 
2023-12-03 20:06:52 1:06AM DBG Prompt (after templating): User:How are you?
2023-12-03 20:06:52 1:06AM DBG Loading model llama from 
2023-12-03 20:06:52 1:06AM DBG Stopping all backends except ''
2023-12-03 20:06:52 1:06AM DBG Loading model in memory from file: /models
2023-12-03 20:06:52 1:06AM DBG Loading Model  with gRPC (file: /models) (backend: llama): {backendString:llama model: threads:8 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0001da5a0 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh petals:/build/backend/python/petals/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:true parallelRequests:false}
2023-12-03 20:06:52 1:06AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama
2023-12-03 20:06:52 1:06AM DBG GRPC Service for  will be running at: '127.0.0.1:34533'
2023-12-03 20:06:52 1:06AM DBG GRPC Service state dir: /tmp/go-processmanager3341423294
2023-12-03 20:06:52 1:06AM DBG GRPC Service Started
2023-12-03 20:06:53 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:34533: connect: connection refused"
2023-12-03 20:06:53 1:06AM DBG GRPC(-127.0.0.1:34533): stderr 2023/12/04 01:06:53 gRPC Server listening at 127.0.0.1:34533
2023-12-03 20:06:55 1:06AM DBG GRPC Service Ready
2023-12-03 20:06:55 1:06AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model: ContextSize:4096 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:8 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0}
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr create_gpt_params_cuda: loading model /models
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr ggml_init_cublas: found 1 CUDA devices:
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr   Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr gguf_init_from_file: invalid magic number 00000000
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr error loading model: llama_model_loader: failed to load model from /models
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr 
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr llama_load_model_from_file: failed to load model
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr llama_init_from_gpt_params: error: failed to load model '/models'
2023-12-03 20:06:55 1:06AM DBG GRPC(-127.0.0.1:34533): stderr load_binding_model: error: unable to load model
2023-12-03 20:06:55 [172.18.0.1]:54898 500 - POST /v1/chat/completions
2023-12-03 20:07:12 [127.0.0.1]:37870 200 - GET /readyz
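My reading of the debug output above (an interpretation, not something stated later in the thread): Model: is empty in the resolved configuration, so the backend is handed the bare /models directory as the model file (ModelFile:/models), which is exactly why gguf_init_from_file reports invalid magic number 00000000 and the request ends in a 500. A quick cross-check is to confirm the expected .gguf exists where the container mounts it:

ls -lh ./models/            # on the host; mounted as /models in the container

and to make sure the model definition's parameters.model names that file rather than being left empty.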

If I change to the lunademo model from the model-gallery (also used in the model setup how-to), I get many more errors in debug:

2023-12-03 20:12:13 [127.0.0.1]:51240 200 - GET /readyz
2023-12-03 20:12:28 1:12AM DBG Request received: 
2023-12-03 20:12:28 1:12AM DBG Configuration read: &{PredictionOptions:{Model:luna-ai-llama2-uncensored.Q4_K_M.gguf Language: N:0 TopP:0 TopK:0 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:lunademo F16:false Threads:10 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:luna-chat-message ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:4096 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
2023-12-03 20:12:28 1:12AM DBG Parameters: &{PredictionOptions:{Model:luna-ai-llama2-uncensored.Q4_K_M.gguf Language: N:0 TopP:0 TopK:0 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:lunademo F16:false Threads:10 Debug:true Roles:map[] Embeddings:false Backend:llama TemplateConfig:{Chat:luna-chat-message ChatMessage: Completion: Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:4096 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
2023-12-03 20:12:28 1:12AM DBG Prompt (before templating): How are you?
2023-12-03 20:12:28 1:12AM DBG Template found, input modified to: How are you?
2023-12-03 20:12:28 
2023-12-03 20:12:28 ASSISTANT:
2023-12-03 20:12:28 
2023-12-03 20:12:28 1:12AM DBG Prompt (after templating): How are you?
2023-12-03 20:12:28 
2023-12-03 20:12:28 ASSISTANT:
2023-12-03 20:12:28 
2023-12-03 20:12:28 1:12AM DBG Loading model llama from luna-ai-llama2-uncensored.Q4_K_M.gguf
2023-12-03 20:12:28 1:12AM DBG Stopping all backends except 'luna-ai-llama2-uncensored.Q4_K_M.gguf'
2023-12-03 20:12:28 1:12AM DBG [single-backend] Stopping 
2023-12-03 20:12:28 1:12AM DBG Loading model in memory from file: /models/luna-ai-llama2-uncensored.Q4_K_M.gguf
2023-12-03 20:12:28 1:12AM DBG Loading Model luna-ai-llama2-uncensored.Q4_K_M.gguf with gRPC (file: /models/luna-ai-llama2-uncensored.Q4_K_M.gguf) (backend: llama): {backendString:llama model:luna-ai-llama2-uncensored.Q4_K_M.gguf threads:10 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc0001da5a0 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh petals:/build/backend/python/petals/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:true parallelRequests:false}
2023-12-03 20:12:28 1:12AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama
2023-12-03 20:12:28 1:12AM DBG GRPC Service for luna-ai-llama2-uncensored.Q4_K_M.gguf will be running at: '127.0.0.1:45223'
2023-12-03 20:12:28 1:12AM DBG GRPC Service state dir: /tmp/go-processmanager3150385545
2023-12-03 20:12:28 1:12AM DBG GRPC Service Started
2023-12-03 20:12:28 rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:45223: connect: connection refused"
2023-12-03 20:12:28 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr 2023/12/04 01:12:28 gRPC Server listening at 127.0.0.1:45223
2023-12-03 20:12:30 1:12AM DBG GRPC Service Ready
2023-12-03 20:12:30 1:12AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:luna-ai-llama2-uncensored.Q4_K_M.gguf ContextSize:4096 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:10 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/luna-ai-llama2-uncensored.Q4_K_M.gguf Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: MMProj: RopeScaling: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0}
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr create_gpt_params_cuda: loading model /models/luna-ai-llama2-uncensored.Q4_K_M.gguf
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr ggml_init_cublas: found 1 CUDA devices:
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr   Device 0: NVIDIA GeForce RTX 4080, compute capability 8.9
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /models/luna-ai-llama2-uncensored.Q4_K_M.gguf (version GGUF V2 (latest))
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    0:                token_embd.weight q4_K     [  4096, 32000,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    1:              blk.0.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    2:              blk.0.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    3:              blk.0.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    4:         blk.0.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    5:            blk.0.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    6:            blk.0.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    7:              blk.0.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    8:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor    9:            blk.0.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   10:              blk.1.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   11:              blk.1.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   12:              blk.1.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   13:         blk.1.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   14:            blk.1.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   15:            blk.1.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   16:              blk.1.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   17:           blk.1.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   18:            blk.1.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   19:              blk.2.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   20:              blk.2.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   21:              blk.2.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   22:         blk.2.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   23:            blk.2.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   24:            blk.2.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   25:              blk.2.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   26:           blk.2.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   27:            blk.2.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   28:              blk.3.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   29:              blk.3.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   30:              blk.3.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   31:         blk.3.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   32:            blk.3.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   33:            blk.3.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   34:              blk.3.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   35:           blk.3.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   36:            blk.3.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   37:              blk.4.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   38:              blk.4.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   39:              blk.4.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   40:         blk.4.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   41:            blk.4.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   42:            blk.4.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   43:              blk.4.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   44:           blk.4.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   45:            blk.4.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   46:              blk.5.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   47:              blk.5.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   48:              blk.5.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   49:         blk.5.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   50:            blk.5.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   51:            blk.5.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   52:              blk.5.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   53:           blk.5.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   54:            blk.5.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   55:              blk.6.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   56:              blk.6.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   57:              blk.6.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   58:         blk.6.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   59:            blk.6.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   60:            blk.6.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   61:              blk.6.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   62:           blk.6.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   63:            blk.6.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   64:              blk.7.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   65:              blk.7.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   66:              blk.7.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   67:         blk.7.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   68:            blk.7.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   69:            blk.7.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   70:              blk.7.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   71:           blk.7.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   72:            blk.7.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   73:              blk.8.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   74:              blk.8.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   75:              blk.8.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   76:         blk.8.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   77:            blk.8.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   78:            blk.8.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   79:              blk.8.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   80:           blk.8.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   81:            blk.8.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   82:              blk.9.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   83:              blk.9.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   84:              blk.9.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   85:         blk.9.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   86:            blk.9.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   87:            blk.9.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   88:              blk.9.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   89:           blk.9.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   90:            blk.9.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   91:             blk.10.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   92:             blk.10.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   93:             blk.10.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   94:        blk.10.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   95:           blk.10.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   96:           blk.10.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   97:             blk.10.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   98:          blk.10.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor   99:           blk.10.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  100:             blk.11.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  101:             blk.11.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  102:             blk.11.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  103:        blk.11.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  104:           blk.11.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  105:           blk.11.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  106:             blk.11.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  107:          blk.11.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  108:           blk.11.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  109:             blk.12.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  110:             blk.12.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  111:             blk.12.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  112:        blk.12.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  113:           blk.12.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  114:           blk.12.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  115:             blk.12.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  116:          blk.12.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  117:           blk.12.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  118:             blk.13.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  119:             blk.13.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  120:             blk.13.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  121:        blk.13.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  122:           blk.13.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  123:           blk.13.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  124:             blk.13.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  125:          blk.13.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  126:           blk.13.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  127:             blk.14.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  128:             blk.14.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  129:             blk.14.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  130:        blk.14.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  131:           blk.14.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  132:           blk.14.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  133:             blk.14.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  134:          blk.14.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  135:           blk.14.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  136:             blk.15.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  137:             blk.15.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  138:             blk.15.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  139:        blk.15.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  140:           blk.15.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  141:           blk.15.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  142:             blk.15.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  143:          blk.15.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  144:           blk.15.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  145:             blk.16.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  146:             blk.16.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  147:             blk.16.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  148:        blk.16.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  149:           blk.16.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  150:           blk.16.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  151:             blk.16.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  152:          blk.16.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  153:           blk.16.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  154:             blk.17.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  155:             blk.17.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  156:             blk.17.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  157:        blk.17.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  158:           blk.17.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  159:           blk.17.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  160:             blk.17.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  161:          blk.17.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  162:           blk.17.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  163:             blk.18.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  164:             blk.18.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  165:             blk.18.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  166:        blk.18.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  167:           blk.18.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  168:           blk.18.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  169:             blk.18.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  170:          blk.18.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  171:           blk.18.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  172:             blk.19.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  173:             blk.19.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  174:             blk.19.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  175:        blk.19.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  176:           blk.19.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  177:           blk.19.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  178:             blk.19.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  179:          blk.19.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  180:           blk.19.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  181:             blk.20.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  182:             blk.20.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  183:             blk.20.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  184:        blk.20.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  185:           blk.20.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  186:           blk.20.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  187:             blk.20.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  188:          blk.20.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  189:           blk.20.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  190:             blk.21.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  191:             blk.21.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  192:             blk.21.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  193:        blk.21.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  194:           blk.21.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  195:           blk.21.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  196:             blk.21.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  197:          blk.21.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  198:           blk.21.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  199:             blk.22.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  200:             blk.22.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  201:             blk.22.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  202:        blk.22.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  203:           blk.22.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  204:           blk.22.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  205:             blk.22.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  206:          blk.22.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  207:           blk.22.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  208:             blk.23.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  209:             blk.23.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  210:             blk.23.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  211:        blk.23.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  212:           blk.23.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  213:           blk.23.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  214:             blk.23.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  215:          blk.23.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  216:           blk.23.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  217:             blk.24.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  218:             blk.24.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  219:             blk.24.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  220:        blk.24.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  221:           blk.24.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  222:           blk.24.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  223:             blk.24.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  224:          blk.24.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  225:           blk.24.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  226:             blk.25.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  227:             blk.25.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  228:             blk.25.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  229:        blk.25.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  230:           blk.25.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  231:           blk.25.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  232:             blk.25.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  233:          blk.25.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  234:           blk.25.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  235:             blk.26.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  236:             blk.26.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  237:             blk.26.attn_v.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  238:        blk.26.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  239:           blk.26.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  240:           blk.26.ffn_down.weight q4_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  241:             blk.26.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  242:          blk.26.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  243:           blk.26.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  244:             blk.27.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  245:             blk.27.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  246:             blk.27.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  247:        blk.27.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  248:           blk.27.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  249:           blk.27.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  250:             blk.27.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  251:          blk.27.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  252:           blk.27.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  253:             blk.28.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  254:             blk.28.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  255:             blk.28.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  256:        blk.28.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  257:           blk.28.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  258:           blk.28.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  259:             blk.28.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  260:          blk.28.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  261:           blk.28.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  262:             blk.29.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  263:             blk.29.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  264:             blk.29.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  265:        blk.29.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  266:           blk.29.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  267:           blk.29.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  268:             blk.29.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  269:          blk.29.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  270:           blk.29.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  271:             blk.30.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  272:             blk.30.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  273:             blk.30.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  274:        blk.30.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  275:           blk.30.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  276:           blk.30.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  277:             blk.30.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  278:          blk.30.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  279:           blk.30.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  280:             blk.31.attn_q.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  281:             blk.31.attn_k.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  282:             blk.31.attn_v.weight q6_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  283:        blk.31.attn_output.weight q4_K     [  4096,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  284:           blk.31.ffn_gate.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  285:           blk.31.ffn_down.weight q6_K     [ 11008,  4096,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  286:             blk.31.ffn_up.weight q4_K     [  4096, 11008,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  287:          blk.31.attn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  288:           blk.31.ffn_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  289:               output_norm.weight f32      [  4096,     1,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - tensor  290:                    output.weight q6_K     [  4096, 32000,     1,     1 ]
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   0:                       general.architecture str     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   1:                               general.name str     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   2:                       llama.context_length u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   3:                     llama.embedding_length u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   4:                          llama.block_count u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   5:                  llama.feed_forward_length u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   6:                 llama.rope.dimension_count u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   7:                 llama.attention.head_count u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   8:              llama.attention.head_count_kv u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  10:                          general.file_type u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  11:                       tokenizer.ggml.model str     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  17:            tokenizer.ggml.padding_token_id u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - kv  18:               general.quantization_version u32     
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - type  f32:   65 tensors
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - type q4_K:  193 tensors
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_model_loader: - type q6_K:   33 tensors
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: format         = GGUF V2 (latest)
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: arch           = llama
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: vocab type     = SPM
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_vocab        = 32000
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_merges       = 0
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_ctx_train    = 2048
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_ctx          = 4096
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_embd         = 4096
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_head         = 32
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_head_kv      = 32
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_layer        = 32
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_rot          = 128
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_gqa          = 1
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: f_norm_eps     = 0.0e+00
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: f_norm_rms_eps = 1.0e-05
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: n_ff           = 11008
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: freq_base      = 10000.0
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: freq_scale     = 1
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: model type     = 7B
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: model ftype    = mostly Q4_K - Medium
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: model params   = 6.74 B
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: model size     = 3.80 GiB (4.84 BPW) 
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: general.name   = tap-m_luna-ai-llama2-uncensored
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: BOS token = 1 '<s>'
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: EOS token = 2 '</s>'
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: UNK token = 0 '<unk>'
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: PAD token = 0 '<unk>'
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_print_meta: LF token  = 13 '<0x0A>'
2023-12-03 20:12:30 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_tensors: ggml ctx size = 3891.34 MB
2023-12-03 20:12:32 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_tensors: using CUDA for GPU acceleration
2023-12-03 20:12:32 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_tensors: mem required  = 3891.34 MB (+ 4096.00 MB per state)
2023-12-03 20:12:32 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_tensors: offloading 0 repeating layers to GPU
2023-12-03 20:12:32 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_tensors: offloaded 0/35 layers to GPU
2023-12-03 20:12:32 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llm_load_tensors: VRAM used: 0 MB
2023-12-03 20:12:35 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr ..................................................................................................
2023-12-03 20:12:37 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_new_context_with_model: kv self size  = 4096.00 MB
2023-12-03 20:12:37 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_new_context_with_model: compute buffer total size =  281.47 MB
2023-12-03 20:12:37 1:12AM DBG GRPC(luna-ai-llama2-uncensored.Q4_K_M.gguf-127.0.0.1:45223): stderr llama_new_context_with_model: VRAM scratch buffer: 280.00 MB
2023-12-03 20:13:13 [127.0.0.1]:34806 200 - GET /readyz
BackMountainDevil commented 7 months ago

-rw-rw-r-- 1 root root 3785248281 Jul 23 10:36 ggml-gpt4all-j.bin
-rw-r--r-- 1 root root        179 Jul 23 11:01 gpt4all-chat.tmpl
-rw-r--r-- 1 root root         57 Jul 23 11:01 gpt4all-completion.tmpl
-rw-r--r-- 1 root root        202 Jul 23 11:01 gpt4all-j.yaml

@finger42 could you please show the contents of those three text files (the two tmpl files and the yaml)?

Here is what I did.

# Download gpt4all-j to models/
LocalAI$ wget https://gpt4all.io/models/ggml-gpt4all-j.bin -O models/ggml-gpt4all-j

LocalAI$ sha1sum models/ggml-gpt4all-j
ccfbaf3e2406f442f3b4fefb8672e706aafd522a  models/ggml-gpt4all-j

# Use a template from the examples
LocalAI$ cp -rf prompt-templates/ggml-gpt4all-j.tmpl models/

LocalAI$ docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4
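
For completeness, the request that then hits the error looks roughly like this (a sketch; the model id is assumed to be the file name under /models):

# Query the model that was just downloaded
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
     "model": "ggml-gpt4all-j",
     "messages": [{"role": "user", "content": "How are you?"}],
     "temperature": 0.9
   }'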
TheRealAlexV commented 7 months ago

I'm still having this issue. It should not have been marked as completed.

lunamidori5 commented 7 months ago

@TheRealAlexV my bad, that must have been a misclick; you're right, it should not have been closed. Did you follow the how-tos on the site?

Moanrisy commented 7 months ago

I had the same problem. The model I used was alpaca.13b.ggmlv3.q8_0.bin; with another model, ggml-model-gpt4all-falcon-q4_0.bin, it worked. Could it be because the alpaca.13b.ggmlv3.q8_0.bin model is a GPU model?

Does it work with other models?

I just tried the LLM mentioned in the example (luna-ai) and have this problem too.
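
For what it's worth, q8_0 is only a quantization level, not a GPU-only format; the more likely difference is the on-disk file format, which decides which backends can load the file. A rough way to check (a sketch, run from the models directory):

# Show the first four bytes of each model file as hex/ASCII.
# GGUF models start with the string "GGUF"; anything else (the older ggml-era
# magics) is a pre-GGUF file that the newer llama.cpp-based backends reject.
head -c 4 alpaca.13b.ggmlv3.q8_0.bin | xxd
head -c 4 ggml-model-gpt4all-falcon-q4_0.bin | xxd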

wizkidweb commented 6 months ago

I'm also encountering this problem. I have tried several models, including ones supported in the model gallery, and I get the same error:

Loading model 'ggml-gpt4all-j-v1.3-groovy.bin' with backend bert-embeddings
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:34109: connect: connection refused"

I'm using this command to launch it within a VM:

docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4
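
One thing that can make the underlying failure easier to spot is pinning a single backend in the model's YAML instead of letting LocalAI try every backend greedily; then there is only one stderr stream to read. A minimal sketch, assuming the same config layout as the gpt-3.5-turbo-16k.yaml shown later in this thread (the backend name and file name here are examples and must match your setup):

# Write a model config that pins the gpt4all backend for this file
cat > models/gpt4all-j.yaml <<'EOF'
name: gpt4all-j
backend: gpt4all
parameters:
  model: ggml-gpt4all-j-v1.3-groovy.bin
EOF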
chris-hatton commented 6 months ago

I could understand any LocalAI Dev or contributor being frustrated seeing so many reports of:

transport: Error while dialing: dial tcp 127.0.0.1:nnnnn

...since it's clear this is not an underlying error, but a symptom of the back-end failing, for any number of reasons.

This does highlight a design issue in the way that failures are surfaced from LocalAI. This will continue to be a pain point unless a way can be found to surface the 'real' errors better. I think that deserves to be a priority: without a resolution, I guarantee LocalAI will continue shedding users, no matter how good the rest of the features are.
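
Until that happens, the most practical workaround is to run with --debug and filter for the backend's own stderr/stdout lines, since that is where the real failure ends up. A sketch, reusing the docker command from above:

# Keep only the per-backend output lines; they contain the actual reason the
# backend gave up (e.g. "invalid magic", "failed to load model", out of memory, ...).
docker run -p 8080:8080 -v $PWD/models:/models --rm \
  quay.io/go-skynet/local-ai:latest \
  --models-path /models --context-size 700 --threads 4 --debug true \
  2>&1 | grep -E "stderr|stdout"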

thiswillbeyourgithub commented 6 months ago

I could understand any LocalAI Dev or contributor being frustrated seeing so many reports of:

transport: Error while dialing: dial tcp 127.0.0.1:34109

...since it seems fairly clear that this is not an underlying error, but rather a symptom of a back-end portion failing for any number of reasons. This does highlight a design issue in the way that failures are surfaced from LocalAI. I think this is going to go on being a 'quality of life' problem unless a way can be found to surface the 'real' errors better.

I concur. To me this is clearly unexpected behavior: the error message that we actually see has to be more specific than that. I still have very little clue why I have never succeeded in running LocalAI for months now (trying every new release, different models, different builds, etc.), but I always gave up because finding the root cause without a clear error message is too inefficient.

BackMountainDevil commented 6 months ago

Well, I do not know much about Go, but I tried both the LocalAI Docker image and the binaries. The binary's help output was useful, though I could not get the three binaries to run. What I want to say is that you can add --debug true when running Docker to get more debugging info, such as:

docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4 --debug true
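
If the flag does not seem to take effect, setting the DEBUG environment variable should do the same thing (assumption based on the LocalAI docs for this version range):

docker run -p 8080:8080 -e DEBUG=true -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4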
thiswillbeyourgithub commented 6 months ago

I did try adding every debug flag I could find; it never got me past that error message.

chris-hatton commented 6 months ago

I did try adding every debug flag I could find; it never got me past that error message.

Same here. I don't know enough about LocalAI's architecture to describe it better, but it's evident from the issue reports here that there's an entire layer of error reporting being effectively hidden from users, at least in the Docker image setup.

BackMountainDevil commented 6 months ago

I remember that debug gives more info, so I ran it again to show the difference. This is only to prove that turning debug on shows more info. I am not sure how to set the values of the other params (context_size, f16, top_p, ...) or the tmpl.

gpt-3.5-turbo-16k.yaml

name: gpt-3.5-turbo-16k-0613
parameters:
  model: ggml-gpt4all-j
  top_k: 80
  temperature: 0.2
  top_p: 0.7
  f16: true
context_size: 16384

template:
  completion: ggml-gpt4all-j
  chat: ggml-gpt4all-j

ggml-gpt4all-j.tmpl

The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
{{.Input}}
### Response:

Without debug:

$ curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"open_llama_7b_v2","object":"model"},{"id":"gpt-3.5-turbo-16k-0613","object":"model"},{"id":"WizardLM-7B-uncensored-GGUF","object":"model"},{"id":"alpaca-cache","object":"model"},{"id":"luna-ai-llama2","object":"model"},{"id":"models","object":"model"}]}

$ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
>      "model": "gpt-3.5-turbo-16k-0613",
>      "messages": [{"role": "user", "content": "How are you?"}],
>      "temperature": 0.9
>    }'
{"created":1704535407,"object":finish_reason":"stop","message"
$ docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 16000 --threads 10
@@@@@
Skipping rebuild
@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also https://github.com/go-skynet/LocalAI/issues/288
@@@@@
CPU info:
model name      : Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
CPU:    AVX    found OK
CPU:    AVX2   found OK
CPU:    AVX512 found OK
@@@@@
10:03AM INF Starting LocalAI using 10 threads, with models path: /models
10:03AM INF LocalAI version: v1.40.0 (6ef7ea2635ae5371be4e6eef516c2ad4afd9f4a3)

 ┌───────────────────────────────────────────────────┐ 
 │                   Fiber v2.50.0                   │ 
 │               http://127.0.0.1:8080               │ 
 │       (bound on host 0.0.0.0 and port 8080)       │ 
 │                                                   │ 
 │ Handlers ............ 73  Processes ........... 1 │ 
 │ Prefork ....... Disabled  PID ................ 14 │ 
 └───────────────────────────────────────────────────┘ 

rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:37487: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:38119: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:32775: connect: connection refused"
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:44117: connect: connection refused"

With debug:

$ curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"gpt-3.5-turbo-16k-0613","object":"model"},{"id":"open_llama_7b_v2","object":"model"},{"id":"WizardLM-7B-uncensored-GGUF","object":"model"},{"id":"alpaca-cache","object":"model"},{"id":"luna-ai-llama2","object":"model"},{"id":"models","object":"model"}]}

$ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
>      "model": "gpt-3.5-turbo-16k-0613",
>      "messages": [{"role": "user", "content": "How are you?"}],
>      "temperature": 0.9
>    }'
{"created":1704535772,"object":"chat.completion","id":"bdd5fa1b-35b7-4b56-b25b-95bc3d86af6b","model":"gpt-3.5-turbo-16k-0613","choices":[{"index":0,"finish_reason":"stop","message":{"role":"assistant","content":"I'm doing well. How about you?"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
$ docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 16000 --threads 10 --debug true
@@@@@
Skipping rebuild
@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also https://github.com/go-skynet/LocalAI/issues/288
@@@@@
CPU info:
model name      : Intel(R) Xeon(R) Platinum 8358P CPU @ 2.60GHz
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid md_clear pconfig flush_l1d arch_capabilities
CPU:    AVX    found OK
CPU:    AVX2   found OK
CPU:    AVX512 found OK
@@@@@
10:09AM INF Starting LocalAI using 10 threads, with models path: /models
10:09AM INF LocalAI version: v1.40.0 (6ef7ea2635ae5371be4e6eef516c2ad4afd9f4a3)
10:09AM DBG Model: gpt-3.5-turbo-16k-0613 (config: {PredictionOptions:{Model:ggml-gpt4all-j Language: N:0 TopP:0.7 TopK:80 Temperature:0.2 Maxtokens:0 Echo:false Batch:0 F16:true IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo-16k-0613 F16:false Threads:0 Debug:false Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:ggml-gpt4all-j ChatMessage: Completion:ggml-gpt4all-j Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:16384 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}})
10:09AM DBG Model: open_llama_7b_v2 (config: {PredictionOptions:{Model:openlm-research-open_llama_7b_v2-Q8_0.gguf Language: N:0 TopP:0 TopK:0 Temperature:0.3 Maxtokens:0 Echo:false Batch:0 F16:false IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:open_llama_7b_v2 F16:false Threads:10 Debug:false Roles:map[assistant:### Response: system:### System Instruction: user:### Instruction:] Embeddings:false Backend:llama TemplateConfig:{Chat:llama2-chat-message ChatMessage: Completion:completion Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath:alpaca-cache PromptCacheAll:true PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[HUMAN: ### Response:] Cutstrings:[] TrimSpace:[] ContextSize:512 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}})
10:09AM DBG Extracting backend assets files to /tmp/localai/backend_data

 ┌───────────────────────────────────────────────────┐ 
 │                   Fiber v2.50.0                   │ 
 │               http://127.0.0.1:8080               │ 
 │       (bound on host 0.0.0.0 and port 8080)       │ 
 │                                                   │ 
 │ Handlers ............ 74  Processes ........... 1 │ 
 │ Prefork ....... Disabled  PID ................ 14 │ 
 └───────────────────────────────────────────────────┘ 

[172.17.0.1]:34754 200 - GET /v1/models
10:10AM DBG Request received: 
10:10AM DBG Configuration read: &{PredictionOptions:{Model:ggml-gpt4all-j Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:true IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo-16k-0613 F16:false Threads:10 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:ggml-gpt4all-j ChatMessage: Completion:ggml-gpt4all-j Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:16384 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
10:10AM DBG Parameters: &{PredictionOptions:{Model:ggml-gpt4all-j Language: N:0 TopP:0.7 TopK:80 Temperature:0.9 Maxtokens:0 Echo:false Batch:0 F16:true IgnoreEOS:false RepeatPenalty:0 Keep:0 MirostatETA:0 MirostatTAU:0 Mirostat:0 FrequencyPenalty:0 TFZ:0 TypicalP:0 Seed:0 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:gpt-3.5-turbo-16k-0613 F16:false Threads:10 Debug:true Roles:map[] Embeddings:false Backend: TemplateConfig:{Chat:ggml-gpt4all-j ChatMessage: Completion:ggml-gpt4all-j Edit: Functions:} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0 MirostatTAU:0 Mirostat:0 NGPULayers:0 MMap:false MMlock:false LowVRAM:false Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] ContextSize:16384 NUMA:false LoraAdapter: LoraBase: NoMulMatQ:false DraftModel: NDraft:0 Quantization:} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{PipelineType: SchedulerType: CUDA:false EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:}}
10:10AM DBG Prompt (before templating): How are you?
10:10AM DBG Template found, input modified to: The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
How are you?
### Response:

10:10AM DBG Prompt (after templating): The prompt below is a question to answer, a task to complete, or a conversation to respond to; decide which and write an appropriate response.
### Prompt:
How are you?
### Response:

10:10AM DBG Loading model 'ggml-gpt4all-j' greedly from all the available backends: llama-cpp, llama-stable, llama, gpt4all, gptneox, bert-embeddings, falcon-ggml, gptj, gpt2, dolly, mpt, replit, starcoder, rwkv, whisper, stablediffusion, piper, /build/extra/grpc/diffusers/backend_diffusers.py, /build/extra/grpc/exllama/exllama.py, /build/extra/grpc/vall-e-x/ttsvalle.py, /build/extra/grpc/vllm/backend_vllm.py, /build/extra/grpc/huggingface/huggingface.py, /build/extra/grpc/autogptq/autogptq.py, /build/extra/grpc/bark/ttsbark.py
10:10AM DBG [llama-cpp] Attempting to load
10:10AM DBG Loading model llama-cpp from ggml-gpt4all-j
10:10AM DBG Loading model in memory from file: /models/ggml-gpt4all-j
10:10AM DBG Loading GRPC Model llama-cpp: {backendString:llama-cpp model:ggml-gpt4all-j threads:10 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00060d6c0 externalBackends:map[autogptq:/build/extra/grpc/autogptq/autogptq.py bark:/build/extra/grpc/bark/ttsbark.py diffusers:/build/extra/grpc/diffusers/backend_diffusers.py exllama:/build/extra/grpc/exllama/exllama.py huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py vall-e-x:/build/extra/grpc/vall-e-x/ttsvalle.py vllm:/build/extra/grpc/vllm/backend_vllm.py] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
10:10AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp
10:10AM DBG GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:45957'
10:10AM DBG GRPC Service state dir: /tmp/go-processmanager2459821549
10:10AM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:45957: connect: connection refused"
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45957): stdout Server listening on 127.0.0.1:45957
10:10AM DBG GRPC Service Ready
10:10AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:ggml-gpt4all-j ContextSize:16384 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:10 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/ggml-gpt4all-j Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: NoMulMatQ:false DraftModel: AudioPath: Quantization:}
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45957): stderr gguf_init_from_file: invalid magic characters lmgg.
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45957): stderr error loading model: llama_model_loader: failed to load model from /models/ggml-gpt4all-j
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45957): stderr 
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45957): stderr llama_load_model_from_file: failed to load model
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45957): stderr llama_init_from_gpt_params: error: failed to load model '/models/ggml-gpt4all-j'
10:10AM DBG [llama-cpp] Fails: could not load model: rpc error: code = Canceled desc = 
10:10AM DBG [llama-stable] Attempting to load
10:10AM DBG Loading model llama-stable from ggml-gpt4all-j
10:10AM DBG Loading model in memory from file: /models/ggml-gpt4all-j
10:10AM DBG Loading GRPC Model llama-stable: {backendString:llama-stable model:ggml-gpt4all-j threads:10 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00060d6c0 externalBackends:map[autogptq:/build/extra/grpc/autogptq/autogptq.py bark:/build/extra/grpc/bark/ttsbark.py diffusers:/build/extra/grpc/diffusers/backend_diffusers.py exllama:/build/extra/grpc/exllama/exllama.py huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py vall-e-x:/build/extra/grpc/vall-e-x/ttsvalle.py vllm:/build/extra/grpc/vllm/backend_vllm.py] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
10:10AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-stable
10:10AM DBG GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:45383'
10:10AM DBG GRPC Service state dir: /tmp/go-processmanager3031435554
10:10AM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:45383: connect: connection refused"
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45383): stderr 2024/01/06 10:10:05 gRPC Server listening at 127.0.0.1:45383
10:10AM DBG GRPC Service Ready
10:10AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:ggml-gpt4all-j ContextSize:16384 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:10 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/ggml-gpt4all-j Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: NoMulMatQ:false DraftModel: AudioPath: Quantization:}
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45383): stderr create_gpt_params: loading model /models/ggml-gpt4all-j
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45383): stderr llama.cpp: loading model from /models/ggml-gpt4all-j
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45383): stderr error loading model: unexpectedly reached end of file
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45383): stderr llama_load_model_from_file: failed to load model
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45383): stderr llama_init_from_gpt_params: error: failed to load model '/models/ggml-gpt4all-j'
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45383): stderr load_binding_model: error: unable to load model
10:10AM DBG [llama-stable] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
10:10AM DBG [llama] Attempting to load
10:10AM DBG Loading model llama from ggml-gpt4all-j
10:10AM DBG Loading model in memory from file: /models/ggml-gpt4all-j
10:10AM DBG Loading GRPC Model llama: {backendString:llama model:ggml-gpt4all-j threads:10 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00060d6c0 externalBackends:map[autogptq:/build/extra/grpc/autogptq/autogptq.py bark:/build/extra/grpc/bark/ttsbark.py diffusers:/build/extra/grpc/diffusers/backend_diffusers.py exllama:/build/extra/grpc/exllama/exllama.py huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py vall-e-x:/build/extra/grpc/vall-e-x/ttsvalle.py vllm:/build/extra/grpc/vllm/backend_vllm.py] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
10:10AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama
10:10AM DBG GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:45727'
10:10AM DBG GRPC Service state dir: /tmp/go-processmanager1207006948
10:10AM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:45727: connect: connection refused"
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr 2024/01/06 10:10:16 gRPC Server listening at 127.0.0.1:45727
10:10AM DBG GRPC Service Ready
10:10AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:ggml-gpt4all-j ContextSize:16384 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:10 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/ggml-gpt4all-j Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: NoMulMatQ:false DraftModel: AudioPath: Quantization:}
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr create_gpt_params: loading model /models/ggml-gpt4all-j
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr gguf_init_from_file: invalid magic number 67676d6c
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr error loading model: llama_model_loader: failed to load model from /models/ggml-gpt4all-j
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr 
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr llama_load_model_from_file: failed to load model
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr llama_init_from_gpt_params: error: failed to load model '/models/ggml-gpt4all-j'
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:45727): stderr load_binding_model: error: unable to load model
10:10AM DBG [llama] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
10:10AM DBG [gpt4all] Attempting to load
10:10AM DBG Loading model gpt4all from ggml-gpt4all-j
10:10AM DBG Loading model in memory from file: /models/ggml-gpt4all-j
10:10AM DBG Loading GRPC Model gpt4all: {backendString:gpt4all model:ggml-gpt4all-j threads:10 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00060d6c0 externalBackends:map[autogptq:/build/extra/grpc/autogptq/autogptq.py bark:/build/extra/grpc/bark/ttsbark.py diffusers:/build/extra/grpc/diffusers/backend_diffusers.py exllama:/build/extra/grpc/exllama/exllama.py huggingface-embeddings:/build/extra/grpc/huggingface/huggingface.py vall-e-x:/build/extra/grpc/vall-e-x/ttsvalle.py vllm:/build/extra/grpc/vllm/backend_vllm.py] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:false}
10:10AM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/gpt4all
10:10AM DBG GRPC Service for ggml-gpt4all-j will be running at: '127.0.0.1:34513'
10:10AM DBG GRPC Service state dir: /tmp/go-processmanager1993281322
10:10AM DBG GRPC Service Started
rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:34513: connect: connection refused"
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stderr 2024/01/06 10:10:18 gRPC Server listening at 127.0.0.1:34513
10:10AM DBG GRPC Service Ready
10:10AM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:ggml-gpt4all-j ContextSize:16384 Seed:0 NBatch:512 F16Memory:false MLock:false MMap:false VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:0 MainGPU: TensorSplit: Threads:10 LibrarySearchPath:/tmp/localai/backend_data/backend-assets/gpt4all RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/ggml-gpt4all-j Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 Tokenizer: LoraBase: LoraAdapter: NoMulMatQ:false DraftModel: AudioPath: Quantization:}
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: loading model from '/models/ggml-gpt4all-j' - please wait ...
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: n_vocab = 50400
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: n_ctx   = 2048
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: n_embd  = 4096
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: n_head  = 16
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: n_layer = 28
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: n_rot   = 64
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: f16     = 2
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: ggml ctx size = 5401.45 MB
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: kv self size  =  896.00 MB
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: ................................... done
10:10AM DBG GRPC(ggml-gpt4all-j-127.0.0.1:34513): stdout gptj_model_load: model size =  3609.38 MB / num tensors = 285
10:10AM DBG [gpt4all] Loads OK
10:10AM DBG Response: {"created":1704535772,"object":"chat.completion","id":"bdd5fa1b-35b7-4b56-b25b-95bc3d86af6b","model":"gpt-3.5-turbo-16k-0613","choices":[{"index":0,"finish_reason":"stop","message":{"role":"assistant","content":"I'm doing well. How about you?"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
[172.17.0.1]:34764 200 - POST /v1/chat/completions
[127.0.0.1]:50926 200 - GET /readyz
[127.0.0.1]:50952 200 - GET /readyz
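
For what it's worth, the log above suggests the file at /models/ggml-gpt4all-j is an old-format GGML model rather than a GGUF one: the llama backends reject it with `invalid magic number 67676d6c` (the ASCII bytes of "ggml"), while the gpt4all backend, which still reads the old gptj format, loads it and the request succeeds. The "connection refused" lines appear to be the client dialing before each backend process has finished starting (the debug output shows grpcAttempts:20 and grpcAttemptsDelay:2), not the actual failure. A minimal sketch to tell the two file formats apart, assuming GGUF files begin with the ASCII bytes "GGUF" and taking the legacy magic value from the log line above (this is not part of LocalAI):

```python
# check_model_magic.py -- rough sketch to tell a GGUF model from a legacy GGML one.
# Assumptions: GGUF files start with the ASCII bytes "GGUF"; 0x67676d6c is the
# legacy "ggml" magic, i.e. the value printed in the log above.
import struct
import sys

def model_kind(path: str) -> str:
    with open(path, "rb") as f:
        head = f.read(4)
    if head == b"GGUF":
        return "gguf"
    # Legacy ggml/gptj files store the magic as a little-endian uint32.
    (magic,) = struct.unpack("<I", head)
    if magic == 0x67676D6C:  # "ggml" -> the value shown in the log
        return "legacy ggml (e.g. ggml-gpt4all-j)"
    return f"unknown (magic {magic:#010x})"

if __name__ == "__main__":
    # e.g. python check_model_magic.py /models/ggml-gpt4all-j
    print(model_kind(sys.argv[1]))
```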
gerroon commented 6 months ago

I have the same issue and have been battling it for two days without any solution. Clearly there is a bug somewhere, given how many users are reporting it. I see it with both the released AVX binaries and Docker on WSL2.

mtadeunet commented 6 months ago

Same issue here.

gerroon commented 6 months ago

I followed this simple tutorial to be sure, and I get the same error:

https://dev.to/worldlinetech/introducing-localai-4gg1

curl $HOST/v1/chat/completions -H "Content-Type: application/json" -d '{ "model": "luna-ai-llama2", "messages": [{"role": "user", "content": "Why is the Earth round?"}], "temperature": 0.9 }'


localai-api-1  | 5:27AM INF Loading model 'luna-ai-llama2' greedly from all the available backends: llama-cpp, llama-ggml, llama, gpt4all, gptneox, bert-embeddings, falcon-ggml, gptj, gpt2, dolly, mpt, replit, starcoder, rwkv, whisper, stablediffusion, tinydream, piper, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/bark/run.sh, /build/backend/python/diffusers/run.sh, /build/backend/python/exllama/run.sh, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/autogptq/run.sh, /build/backend/python/coqui/run.sh, /build/backend/python/vllm/run.sh, /build/backend/python/vall-e-x/run.sh, /build/backend/python/transformers-musicgen/run.sh, /build/backend/python/petals/run.sh, /build/backend/python/transformers/run.sh, /build/backend/python/exllama2/run.sh
localai-api-1  | 5:27AM INF [llama-cpp] Attempting to load
localai-api-1  | 5:27AM INF Loading model 'luna-ai-llama2' with backend llama-cpp
localai-api-1  | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:44363: connect: connection refused"
localai-api-1  | rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:44363: connect: connection refused"
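
These two lines match the pattern in the earlier logs: LocalAI spawns the backend process and immediately starts dialing its gRPC port, retrying until the server is listening, so the first refused dials on their own are not necessarily the real failure; the request only fails outright if the backend never comes up (for example because the model file cannot be loaded by any backend). A rough sketch of that wait-for-port behaviour, just to illustrate the retry; the attempt count and delay mirror the grpcAttempts:20 / grpcAttemptsDelay:2 values from the debug log and this is not LocalAI's actual code:

```python
# wait_for_port.py -- illustrative retry loop, roughly what the repeated
# "connection refused" messages correspond to: keep dialing until the backend's
# gRPC server is actually listening, give up after a fixed number of attempts.
import socket
import time

def wait_for_port(host: str, port: int, attempts: int = 20, delay: float = 2.0) -> bool:
    for i in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=1):
                return True  # server is up; the earlier refusals were transient
        except OSError:
            print(f"attempt {i + 1}: connect to {host}:{port} refused, retrying...")
            time.sleep(delay)
    return False  # backend never started listening -> the request really fails

if __name__ == "__main__":
    ok = wait_for_port("127.0.0.1", 44363)
    print("backend reachable" if ok else "backend never started listening")
```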