Stargate256 opened 11 months ago
Hi, thanks for the detailed report.
What do you do on the Nextcloud side? Are you using the "text generation" smart picker provider? If so, it's odd that Nextcloud makes a request to `/v1/chat/completions`. That should only happen when the model name starts with `gpt-`; in your case it should request `/v1/completions`.
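The routing rule described above can be sketched as follows (a hypothetical reconstruction for illustration, not the integration app's actual code):

```shell
# Sketch of the endpoint selection rule described above: models whose
# name starts with "gpt-" are treated as chat models and routed to
# /v1/chat/completions; everything else goes to /v1/completions.
pick_endpoint() {
  case "$1" in
    gpt-*) echo "/v1/chat/completions" ;;
    *)     echo "/v1/completions" ;;
  esac
}

pick_endpoint "gpt-3.5-turbo"   # prints /v1/chat/completions
pick_endpoint "luna-ai-llama2"  # prints /v1/completions
```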
Which project exactly are you using to create the web service? LocalAI can use llama.cpp via `/v1/chat/completions` (https://localai.io/model-compatibility/llama-cpp/), so I assume you're using something else.
I don't have a good internet connection right now; I'll try to play with LocalAI and llama models next week. If we can identify clear cases, the integration app could be adjusted to request different endpoints depending on the selected model.
> Are you using the "text generation" smart picker provider?

Yes.

> Which project exactly are you using to create the web service?

llama.cpp server with
I am facing the same problem.
I have LocalAI running in a virtual machine separate from the Nextcloud server.
Nextcloud version: 27.1.2
OpenAI and LocalAI integration: 1.0.13
LocalAI: v1.30.0
```shell
$ docker ps
CONTAINER ID   IMAGE                               COMMAND                  CREATED          STATUS                    PORTS                                       NAMES
3b789d3b04d1   quay.io/go-skynet/local-ai:latest   "/build/entrypoint.s…"   21 minutes ago   Up 20 minutes (healthy)   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   localai-api-1
```
From the Nextcloud machine, a curl command in the CLI confirmed that LocalAI was working.
Model information is also retrieved, as shown in the following screenshot.
`/v1/completions` is requested. This is the log output from the LocalAI side with debug enabled.
@julien-nc
It is just as you said in https://github.com/nextcloud/integration_openai/issues/44#issuecomment-1719096751:
This is because `/completions` is called whenever the model name does not start with `gpt-`. I am using a non-`gpt-` model in LocalAI, so `/completions` is requested.
However, since `/chat/completions` is already available in LocalAI, this condition is no longer necessary.
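For clarity, the two endpoints also expect differently shaped request bodies, which is why routing to the wrong one matters. A minimal offline comparison (model name taken from the test query in this thread; the payloads are only validated locally, nothing is sent):

```shell
# /v1/completions takes a flat "prompt"; /v1/chat/completions takes a
# "messages" array. Validate both payload shapes locally with python3.
COMPLETIONS_BODY='{"model": "luna-ai-llama2", "prompt": "How are you?"}'
CHAT_BODY='{"model": "luna-ai-llama2", "messages": [{"role": "user", "content": "How are you?"}]}'

echo "$COMPLETIONS_BODY" | python3 -m json.tool > /dev/null && echo "completions payload ok"
echo "$CHAT_BODY" | python3 -m json.tool > /dev/null && echo "chat payload ok"
```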
📖 Text generation (GPT) :: LocalAI documentation https://localai.io/features/text-generation/
This test query works fine:

```shell
curl http://172.16.xxx.yyy:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "luna-ai-llama2",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.9
}'
```
Ok, I think we'll set the default endpoint to `/chat/completions` and add a setting to specify the endpoint manually, just in case someone is still using an old version of LocalAI.
Hi, I have the same issue, but with Nextcloud hosted on a Hetzner server. I run an AI API via Ollama on a public URL (e.g. https://something1234.com). When I enter it as the LocalAI URL, I see a "404 not found /v1/models" in the AI server log, and in the main interface of this plugin there is no model menu to choose from (as user Ynott posted before). Any ideas?
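For what it's worth, recent Ollama versions expose an OpenAI-compatible API under the `/v1` prefix (older versions may lack `/v1/models`), so a 404 there often comes down to the base URL being slightly off, e.g. a trailing slash producing `//v1/models`. A hypothetical normalization helper, not the plugin's actual code:

```shell
# Hypothetical helper: strip a single trailing slash from the base URL
# before appending "/v1/models", so "https://something1234.com/" does
# not turn into "https://something1234.com//v1/models".
models_url() {
  base="${1%/}"
  echo "$base/v1/models"
}

models_url "https://something1234.com/"   # prints https://something1234.com/v1/models
models_url "https://something1234.com"    # prints https://something1234.com/v1/models
```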
I switched from llama.cpp to koboldcpp (https://github.com/LostRuins/koboldcpp) and now the Nextcloud LocalAI integration works.
Is there a temporary workaround for this?
> Is there a temporary workaround for this?
I mean, maybe koboldcpp? I'm not having any luck with that one, personally.
open-webui isn't working for me either, but my errors look a bit different from those described above: https://help.nextcloud.com/t/ollama-integration-with-nexcloud/180302/6
LocalAI does work for me, FWIW. Have you tried that? Their "AIO" images are super handy: https://localai.io/basics/container/
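To spell out what the linked container page documents, starting a LocalAI AIO image looks roughly like this; the image tag is the one the linked page lists at the time of writing, so verify it there before use (this is a deployment fragment, not something the integration app runs):

```shell
# Start a LocalAI "AIO" (all-in-one, CPU) container as documented on
# https://localai.io/basics/container/ -- tag may change over time.
IMAGE="localai/localai:latest-aio-cpu"
docker run -p 8080:8080 --name local-ai -ti "$IMAGE"
```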
Bug description
I am trying to use the local text generation integration with llama.cpp.
When I try to generate text I get: OpenAI error: Client error: `POST http://192.168.73.78:8000/v1/chat/completions` resulted in a `404 Not Found` response: File Not Found. The problem is that llama.cpp uses "http://192.168.73.78:8000//completion" instead.
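For context, the llama.cpp server's own completion endpoint is `/completion` and, as far as I know, it takes a flat `prompt` plus fields like `n_predict` rather than the OpenAI `model`/`messages` shape (check the llama.cpp server README for the authoritative field list). A minimal sketch of such a request body, validated locally:

```shell
# Native llama.cpp server payload: "prompt" and "n_predict", no
# "model"/"messages". Only validated locally here; nothing is sent.
BODY='{"prompt": "How are you?", "n_predict": 64}'
echo "$BODY" | python3 -m json.tool > /dev/null && echo "payload ok"

# To actually send it against the server from the report above:
#   curl http://192.168.73.78:8000/completion -H "Content-Type: application/json" -d "$BODY"
```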
Steps to reproduce
Expected behavior
It should generate text.
Installation method
Community Manual installation with Archive
Nextcloud Server version
27
Operating system
Debian/Ubuntu
PHP engine version
PHP 8.2
Web server
Apache (supported)
Database engine version
MySQL
Is this bug present after an update or on a fresh install?
None
Are you using the Nextcloud Server Encryption module?
None
What user-backends are you using?
Configuration report
List of activated Apps
No response
Nextcloud Signing status
No response
Nextcloud Logs
No response
Additional info
No response