Hi @apfelcast, thanks for the report. I had a look and could reproduce the issue. It looks like the chosen model for text completion is not correctly set in the database:
php occ config:list integration_openai
{
"apps": {
"integration_openai": {
"types": "",
"chat_endpoint_enabled": "1",
"installed_version": "2.0.3",
"enabled": "yes",
"url": "http:\/\/nextcloud-aio-local-ai:8080"
}
}
}
(the "default_completion_model_id": "<chosen model>" entry is missing).
Because of that, the app falls back to the hard-coded default DEFAULT_COMPLETION_MODEL_ID = 'gpt-3.5-turbo', which the LocalAI container cannot load (see the error log below).
Unfortunately, one cannot even set the value via the integration_openai admin settings: a default model (e.g. ggml-gpt4all-j.bin) is pre-selected there, but it can neither be removed nor changed, so the selection is never written to the backend.
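To see which models the LocalAI container actually serves, one can query its OpenAI-compatible model listing. A diagnostic sketch, assuming the container is reachable under the URL from the config dump above (run it from a host or container inside the same Docker network):
# List the models LocalAI exposes; any "id" returned here would be a
# candidate value for default_completion_model_id (assumption: hostname
# and port taken from the "url" value in the config dump above).
curl http://nextcloud-aio-local-ai:8080/v1/models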
This is a bug in the integration_openai app and needs to be reported there: https://github.com/nextcloud/integration_openai/issues
Just FYI: I have informed the responsible team internally about the bug and hope they fix it soon. However, it would still be useful if you open an issue for this in the linked repo.
Hi @szaimen, thanks for debugging this issue! I will open an issue at the linked repo. Is there a workaround for the issue at the moment?
In theory, you should be able to set this value via occ config:app:set, but I have not found the correct command for this yet.
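For reference, a sketch of what such a command could look like. This is untested: the key name default_completion_model_id is taken from the missing database entry above, nextcloud-aio-nextcloud is the standard AIO container name, and the model name is only an example that must match a model your LocalAI instance actually serves:
# Hypothetical workaround: write the missing app config value directly.
# occ config:app:set <app> <key> --value=<value> is the generic occ syntax;
# the model name "ggml-gpt4all-j.bin" is just an example.
sudo docker exec --user www-data nextcloud-aio-nextcloud \
  php occ config:app:set integration_openai default_completion_model_id --value="ggml-gpt4all-j.bin"
Afterwards, php occ config:list integration_openai should show the new entry.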
Steps to reproduce
Set up Nextcloud AIO with the local-ai community container (see the docker run command below), then send a chat message to Nextcloud Assistant.
Expected behavior
Get a response from inside Nextcloud Assistant.
Actual behavior
Get no response, only an error message.
Other information
Host OS
Debian 12
Output of sudo docker info
Client: Docker Engine - Community
 Version:    27.2.1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.16.2
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  v2.29.2
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
 Containers: 10
  Running: 9
  Paused: 0
  Stopped: 1
 Images: 10
 Server Version: 27.2.1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Using metacopy: false
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c
 runc version: v1.1.14-0-g2c9f560
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: builtin
  cgroupns
 Kernel Version: 6.1.0-25-amd64
 Operating System: Debian GNU/Linux 12 (bookworm)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 9.711GiB
 Name: nextcloud-aio
 ID: 4ebfaf03-0b57-49df-9253-c47a5f2ff5e8
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
Docker run command or docker-compose file that you used
docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 8080:8080 \
  --env APACHE_PORT=11000 \
  --env APACHE_IP_BINDING=0.0.0.0 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  --env AIO_COMMUNITY_CONTAINERS="local-ai" \
  nextcloud/all-in-one:latest
Other valuable info
Error log after an AI request, taken from the Nextcloud log (the German message "Fehler bei der API-Anfrage" means "API request error"):
{"reqId":"KJk30ZFwuaM29YA0x3xq","level":3,"time":"2024-09-19T16:32:36+00:00","remoteAddr":"192.168.50.101","user":"admin","app":"no app in context","method":"GET","url":"/apps/assistant/chat/generate?sessionId=1","message":"LanguageModel call using provider LocalAI failed","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.6 Safari/605.1.15","version":"29.0.6.1","exception":{"Exception":"RuntimeException","Message":"OpenAI/LocalAI request failed: Fehler bei der API-Anfrage:could not load model - all backends returned error: [llama-cpp]: could not load model: rpc error: code = Canceled desc = \n[llama-ggml]: could not load model: rpc error: code = Unknown desc = failed loading model\n[gpt4all]: could not load model: rpc error: code = Unknown desc = failed loading model\n[llama-cpp-fallback]: could not load model: rpc error: code = Canceled desc = \n[piper]: could not load model: rpc error: code = Unknown desc = unsupported model type /models/gpt-3.5-turbo (should end with .onnx)\n[rwkv]: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF\n[stablediffusion]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo: no such file or directory\n[whisper]: could not load model: rpc error: code = Unknown desc = stat /models/gpt-3.5-turbo: no such file or directory\n[huggingface]: could not load model: rpc error: code = Unknown desc = no huggingface token provided\n[bert-embeddings]: could not load model: rpc error: code = Unknown desc = failed loading model\n[/build/backend/python/mamba/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/diffusers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/parler-tts/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/parler-tts/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/bark/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vllm/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/sentencetransformers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. 
some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/petals/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/autogptq/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/openvoice/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/openvoice/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/rerankers/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/rerankers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/vall-e-x/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/coqui/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/exllama2/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS\n[/build/backend/python/transformers-musicgen/run.sh]: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS","Code":0,"Trace":[{"file":"/var/www/html/lib/public/TextProcessing/Task.php","line":103,"function":"process","class":"OCA\OpenAi\TextProcessing\FreePromptProvider","type":"->","args":["This is a conversation in a specific language between admin and you, Nextcloud Assistant. You are a kind, polite and helpful AI that helps admin to the best of its abilities. If you do not understand something, you will ask for clarification. Detect the language that admin is using. Make sure to use the same language in your response. Do not mention the language explicitly.\nHallo\nassistant: "]},{"file":"/var/www/html/lib/private/TextProcessing/Manager.php","line":136,"function":"visitProvider","class":"OCP\TextProcessing\Task","type":"->","args":[["OCA\OpenAi\TextProcessing\FreePromptProvider"]]},{"file":"/var/www/html/custom_apps/assistant/lib/Controller/ChattyLLMController.php","line":444,"function":"runTask","class":"OC\TextProcessing\Manager","type":"->","args":[["OCP\TextProcessing\Task"]]},{"file":"/var/www/html/custom_apps/assistant/lib/Controller/ChattyLLMController.php","line":278,"function":"queryLLM","class":"OCA\Assistant\Controller\ChattyLLMController","type":"->","args":["This is a conversation in a specific language between admin and you, Nextcloud Assistant. You are a kind, polite and helpful AI that helps admin to the best of its abilities. If you do not understand something, you will ask for clarification. 
Detect the language that admin is using. Make sure to use the same language in your response. Do not mention the language explicitly.\nHallo\nassistant: "]},{"file":"/var/www/html/lib/private/AppFramework/Http/Dispatcher.php","line":232,"function":"generateForSession","class":"OCA\Assistant\Controller\ChattyLLMController","type":"->","args":[1]},{"file":"/var/www/html/lib/private/AppFramework/Http/Dispatcher.php","line":138,"function":"executeController","class":"OC\AppFramework\Http\Dispatcher","type":"->","args":[["OCA\Assistant\Controller\ChattyLLMController"],"generateForSession"]},{"file":"/var/www/html/lib/private/AppFramework/App.php","line":184,"function":"dispatch","class":"OC\AppFramework\Http\Dispatcher","type":"->","args":[["OCA\Assistant\Controller\ChattyLLMController"],"generateForSession"]},{"file":"/var/www/html/lib/private/Route/Router.php","line":331,"function":"main","class":"OC\AppFramework\App","type":"::","args":["OCA\Assistant\Controller\ChattyLLMController","generateForSession",["OC\AppFramework\DependencyInjection\DIContainer"],["assistant.chattyllm.generateforsession"]]},{"file":"/var/www/html/lib/base.php","line":1058,"function":"match","class":"OC\Route\Router","type":"->","args":["/apps/assistant/chat/generate"]},{"file":"/var/www/html/index.php","line":49,"function":"handleRequest","class":"OC","type":"::","args":[]}],"File":"/var/www/html/custom_apps/integration_openai/lib/TextProcessing/FreePromptProvider.php","Line":46,"message":"LanguageModel call using provider LocalAI failed","exception":[],"CustomMessage":"LanguageModel call using provider LocalAI failed"},"id":"66ec55054996f"}
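Given the stat /models/gpt-3.5-turbo: no such file or directory entries in the log, it may also be worth checking which model files actually exist inside the LocalAI container. A diagnostic sketch, assuming the community container is named nextcloud-aio-local-ai (the hostname used in the config dump above):
# List the model files LocalAI can see; /models is the path the
# backends try to load from according to the log above.
sudo docker exec nextcloud-aio-local-ai ls -l /models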