nextcloud / integration_openai

OpenAI integration in Nextcloud
GNU Affero General Public License v3.0

LocalAI Integration in Nextcloud AIO not working #129

Closed: apfelcast closed this issue 1 month ago

apfelcast commented 1 month ago

Which version of integration_openai are you using?

2.0.3

Which version of Nextcloud are you using?

29.0.6

Which browser are you using? In case you are using the phone App, specify the Android or iOS version and device please.

Safari 17.6

Describe the Bug

I am using Nextcloud AIO with the Local AI community container. When I try to use AI features through Nextcloud Assistant, I do not get a response. After a while, the following error appears in the Nextcloud log:

API request error: could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = \n[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed ...

This error was already further debugged by @szaimen here: https://github.com/nextcloud/all-in-one/issues/5299

Expected Behavior

Getting a response from the AI.

To Reproduce

Set up a Nextcloud AIO instance using the following command:

docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 8080:8080 \
  --env APACHE_PORT=11000 \
  --env APACHE_IP_BINDING=0.0.0.0 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  --env AIO_COMMUNITY_CONTAINERS="local-ai" \
  nextcloud/all-in-one:latest
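A quick way to confirm the community container actually started while reproducing this (a sketch using standard Docker commands; the container name nextcloud-aio-local-ai is assumed from AIO's naming and matches the folder name mentioned later in this thread):

# Sanity check with plain Docker CLI; container name assumed per AIO naming.
docker ps --filter "name=nextcloud-aio-local-ai"
docker logs --tail 50 nextcloud-aio-local-ai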

julien-nc commented 1 month ago

Are there models listed in the AI admin settings, in the OpenAI/LocalAI section? [screenshot]

If not, there are no model files in LocalAI. If there are models: did you choose one there?

If you chose one, then LocalAI is failing to load it.
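This can also be checked outside the Nextcloud UI. As a sketch, assuming shell access to the local-ai container: LocalAI exposes an OpenAI-compatible model listing on its API port (8080 inside the container, per the startup log further down in this thread).

# From a shell inside the local-ai container: list the models LocalAI registered.
curl http://localhost:8080/v1/models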

szaimen commented 1 month ago

@julien-nc the problem occurs when only one model is available.

szaimen commented 1 month ago

In that case the single model is pre-selected in the UI but never written to the app config.

julien-nc commented 1 month ago

@szaimen I'm on it, thanks for the details.

julien-nc commented 1 month ago

@apfelcast Can you try this? occ config:app:set integration_openai default_completion_model_id --value MODEL_ID, replacing MODEL_ID with the ID of the model you see in the NC OpenAI/LocalAI admin settings.
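On AIO, occ commands are typically run through the Nextcloud container. A sketch following the pattern from the AIO documentation (container name and exec user as documented there; MODEL_ID still needs replacing):

# Run occ inside the AIO Nextcloud container as the web server user,
# then set the default completion model (replace MODEL_ID).
docker exec --user www-data -it nextcloud-aio-nextcloud \
  php occ config:app:set integration_openai default_completion_model_id --value MODEL_ID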

julien-nc commented 1 month ago

The potential fix will be included in the next release, which is coming soon. Once the app has been updated, you still need to visit the AI admin settings at least once so that the model values get stored correctly.

Let's reopen this issue if necessary after some real-life tests.

julien-nc commented 1 month ago

@apfelcast @szaimen The fix is included in integration_openai v3.1.1 if you feel like trying it to check that it works now.

szaimen commented 1 month ago

FYI: I've adjusted the setup instructions accordingly: https://github.com/nextcloud/all-in-one/blob/main/community-containers/local-ai/readme.md

k1n6b0b commented 3 weeks ago

FYI: I've adjusted the setup instructions accordingly: https://github.com/nextcloud/all-in-one/blob/main/community-containers/local-ai/readme.md

I'm still not able to follow the setup instructions (sorry). I have a blank box in the NC AI admin page.

I don't use the admin account (I have it disabled), so I edited the file manually (enhancement request here?).

After the container was started the first time, you should see a new nextcloud-aio-local-ai folder when you open the Files app with the default admin user. In there you should see a models.yaml config file. You can now add models in there. Please refer here for further URLs that you can put in there. Afterwards restart all containers from the AIO interface and the models should automatically get downloaded and activated by the local-ai container.
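For reference, a minimal sketch of what an entry in that models.yaml can look like, using LocalAI's gallery/preload format; the URL and name below are illustrative placeholders, real entries come from the gallery linked in the readme:

# models.yaml - sketch only; the URL and name here are placeholders,
# take real gallery entries from the link in the AIO local-ai readme
- url: github:go-skynet/model-gallery/gpt4all-j.yaml
  name: gpt4all-j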

Could recommended (free?) models be provided to start with? I think I'm too much of an AI noob to be successful here.

szaimen commented 3 weeks ago

Did you actually enable the container inside AIO?

k1n6b0b commented 3 weeks ago

Meaning --env AIO_COMMUNITY_CONTAINERS="facerecognition local-ai" \

Yes, and it's running:


CPU info:
model name      : Intel(R) Xeon(R) CPU D-1537 @ 1.70GHz
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat vnmi umip md_clear flush_l1d arch_capabilities
CPU:    AVX    found OK
CPU:    AVX2   found OK
CPU: no AVX512 found
nc: getaddrinfo for host "nextcloud-aio-nextcloud" port 9001: Temporary failure in name resolution
Waiting for nextcloud to start
[the two lines above alternate a few times, then "Waiting for nextcloud to start" repeats roughly 20 times until Nextcloud is up]
++ nproc
+ THREADS=12
+ export THREADS
+ set +x
4:47PM INF env file found, loading environment variables from file envFile=.env
4:47PM INF Setting logging to info
4:47PM INF Starting LocalAI using 12 threads, with models path: /models
4:47PM INF LocalAI version: v2.17.1 (8142bdc48f3619eddc6344fa4ed83b331f7b37c2)
4:47PM INF Preloading models from /models
4:47PM INF core/startup process completed!
4:47PM INF LocalAI API is listening! Please connect to the endpoint for API documentation. endpoint=http://0.0.0.0:8080
4:47PM INF Success ip=127.0.0.1 latency=2.084412ms method=GET status=200 url=/readyz
4:48PM INF Success ip=172.20.0.12 latency=2.309518ms method=GET status=200 url=/v1/models
4:48PM INF Success ip=127.0.0.1 latency="183.877µs" method=GET status=200 url=/readyz
4:49PM INF Trying to load the model 'ggml-koala-7b-model-q4_0-r2.bin' with the backend '[llama-cpp llama-ggml gpt4all llama-cpp-fallback piper rwkv stablediffusion whisper huggingface bert-embeddings /build/backend/python/sentencetransformers/run.sh /build/backend/python/transformers-musicgen/run.sh /build/backend/python/openvoice/run.sh /build/backend/python/rerankers/run.sh /build/backend/python/vllm/run.sh /build/backend/python/petals/run.sh /build/backend/python/sentencetransformers/run.sh /build/backend/python/exllama2/run.sh /build/backend/python/transformers/run.sh /build/backend/python/autogptq/run.sh /build/backend/python/diffusers/run.sh /build/backend/python/parler-tts/run.sh /build/backend/python/vall-e-x/run.sh /build/backend/python/mamba/run.sh /build/backend/python/coqui/run.sh /build/backend/python/bark/run.sh /build/backend/python/exllama/run.sh]'
4:49PM INF [llama-cpp] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend llama-cpp
4:49PM INF [llama-cpp] attempting to load with AVX2 variant
4:49PM INF [llama-cpp] Fails: could not load model: rpc error: code = Canceled desc =
4:49PM INF [llama-ggml] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend llama-ggml
4:49PM INF [llama-ggml] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
4:49PM INF [gpt4all] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend gpt4all
4:49PM INF [gpt4all] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
4:49PM INF [llama-cpp-fallback] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend llama-cpp-fallback
4:49PM INF [llama-cpp-fallback] Fails: could not load model: rpc error: code = Canceled desc =
4:49PM INF [piper] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend piper
4:49PM INF [piper] Fails: could not load model: rpc error: code = Unknown desc = unsupported model type /models/ggml-koala-7b-model-q4_0-r2.bin (should end with .onnx)
4:49PM INF [rwkv] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend rwkv
4:49PM INF [rwkv] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
4:49PM INF [stablediffusion] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend stablediffusion
4:49PM INF [stablediffusion] Fails: could not load model: rpc error: code = Unknown desc = stat /models/ggml-koala-7b-model-q4_0-r2.bin: no such file or directory
4:49PM INF [whisper] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend whisper
4:49PM INF [whisper] Fails: could not load model: rpc error: code = Unknown desc = stat /models/ggml-koala-7b-model-q4_0-r2.bin: no such file or directory
4:49PM INF [huggingface] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend huggingface
4:49PM INF [huggingface] Fails: could not load model: rpc error: code = Unknown desc = no huggingface token provided
4:49PM INF [bert-embeddings] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend bert-embeddings
4:49PM INF [bert-embeddings] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
4:49PM INF [/build/backend/python/sentencetransformers/run.sh] Attempting to load
4:49PM INF Loading model 'ggml-koala-7b-model-q4_0-r2.bin' with backend /build/backend/python/sentencetransformers/run.sh
4:49PM INF [/build/backend/python/sentencetransformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
[the same three-line "Attempting to load / Loading model / Fails: grpc process not found ... GO_TAGS" pattern repeats for the remaining Python backends: transformers-musicgen, openvoice, rerankers, vllm, petals, sentencetransformers (again), exllama2, transformers, autogptq, diffusers, parler-tts, vall-e-x, mamba, coqui, bark, exllama]
4:49PM ERR Server error error="could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = \n[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model\n... (the full error string concatenates the per-backend failures already shown above) ..." ip=127.0.0.1 latency=20.438205704s method=POST status=500 url=/v1/chat/completions
4:49PM INF Success ip=127.0.0.1 latency="128.572µs" method=GET status=200 url=/readyz

I ran this suggested command as a test after getting errors from Nextcloud Assistant:

root@30e52f2ad34c:/build# curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "ggml-koala-7b-model-q4_0-r2.bin",
  "messages": [{"role": "user", "content": "Say this is a test!"}],
  "temperature": 0.7
}'
{"error":{"code":500,"message":"could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = \n[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model\n[gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model\n[llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc = \n[whisper]: could not load model: rpc error: code = Unknown desc = stat /models/ggml-koala-7b-model-q4_0-r2.bin: no such file or directory\n[stablediffusion]: could not load model: rpc error: code = Unknown desc = stat /models/ggml-koala-7b-model-q4_0-r2.bin: no such file or directory\n[piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /models/ggml-koala-7b-model-q4_0-r2.bin (should end with .onnx)\n[rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF\n[huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided\n[bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model\n[/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. 
some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS","type":""}}root@30e52f2ad34c:/build#
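Worth noting: several backends above fail with "stat /models/...: no such file or directory" while the llama backends fail differently, which can indicate a missing or partially downloaded model file. A sketch of a sanity check (standard shell commands; the container name is assumed from AIO's naming):

# Confirm the model file exists in /models and has a plausible size;
# a 7B q4_0 ggml file is roughly 4 GB, much smaller suggests a failed download.
docker exec nextcloud-aio-local-ai ls -lh /models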