Mintplex-Labs / anything-llm

The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
https://anythingllm.com
MIT License

Ollama connection with `Basic` authorization #2213

Open xieu90 opened 2 months ago

xieu90 commented 2 months ago

Hi, usually Ollama runs on the local machine together with AnythingLLM, so the URL can be http://localhost:11434. I have hosted Ollama on a server and secured it, and I can now reach it at https://ollama.someserver.de. In Firefox I can access it via https://username:password@ollama.someserver.de, but doing the same in AnythingLLM produces this error: Could not respond to message. Request cannot be constructed from a URL that includes credentials: https://username:password@ollama.someserver.de/api/chat

Is there any way to get this working with AnythingLLM?
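For context, that error comes from the Fetch standard, which forbids constructing a request from a URL that embeds credentials; the same credentials can travel in a Basic `Authorization` header instead. A minimal Node sketch, where the hostname and credentials are the placeholders from the comment above:

```js
// fetch() throws for URLs that embed credentials (per the Fetch spec):
//   TypeError: Request cannot be constructed from a URL that includes credentials
// await fetch("https://username:password@ollama.someserver.de/api/chat");

// The same credentials sent as a Basic Authorization header are accepted:
const creds = Buffer.from("username:password").toString("base64");
const res = await fetch("https://ollama.someserver.de/api/chat", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Basic ${creds}`,
  },
  // Hypothetical Ollama chat payload; the model name is an assumption.
  body: JSON.stringify({
    model: "llama3",
    messages: [{ role: "user", content: "hi" }],
  }),
});
console.log(res.status);
```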

timothycarambat commented 1 month ago

We do not have the Ollama integration automatically set the header for Basic authorization requests, since Ollama itself does not support authentication, so building that would be a use-case-specific integration.

I can understand the need to secure your Ollama instance running elsewhere, but all of the LLM integrations we support that do use authorization use a Bearer token, not Basic.

xieu90 commented 1 month ago

If I can inject a Bearer token then it might work. My other question: assuming I manage to set up a bearer token on the server so it can authenticate requests, how do I put the bearer token into AnythingLLM so it gets sent to the server?

In Postman or Bruno I sometimes see a header or body text area where the bearer token can go, but in AnythingLLM (see the screenshot) I have no idea where to put it (yet).
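One way to get token checking "on the server" without Ollama's cooperation is a small reverse proxy in front of it. A hypothetical Node sketch (the token, listen port, and upstream address are assumptions; TLS termination is left to an outer proxy such as nginx or Traefik; this is not something AnythingLLM or Ollama ship):

```js
import http from "node:http";

const TOKEN = "specialtoken"; // assumed shared secret
const OLLAMA = { host: "127.0.0.1", port: 11434 }; // Ollama's default port

http
  .createServer((req, res) => {
    // Reject anything without the expected Bearer token.
    if (req.headers.authorization !== `Bearer ${TOKEN}`) {
      res.writeHead(403, { "Content-Type": "text/plain" });
      return res.end("403 Forbidden");
    }
    // Forward the request unchanged to Ollama and stream the reply back.
    const upstream = http.request(
      { ...OLLAMA, path: req.url, method: req.method, headers: req.headers },
      (r) => {
        res.writeHead(r.statusCode, r.headers);
        r.pipe(res);
      }
    );
    req.pipe(upstream);
  })
  .listen(8443);
```

The other half of the question, how to make AnythingLLM attach that `Authorization` header on the client side, is exactly what this issue is asking for.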

flefevre commented 1 month ago

I have the same difficulty with the Ollama and LiteLLM connectors. I have deployed both, and I was trying to set up the AnythingLLM desktop version to link to those servers at https://ollama-server.mylaborary.fr and https://litellm-server.mylaboratory.fr,

but it seems that AnythingLLM desktop does not allow HTTPS servers?

timothycarambat commented 1 month ago

We do not limit HTTP/HTTPS; that is not the issue. However, if you are using LiteLLM to relay the connection, you can use the LiteLLM connector, which will use LiteLLM's input/output formatting. Regardless, your likely issue is port configuration: HTTPS defaults to port 443 and Ollama runs on :11434, so unless you mapped 443 to forward to 11434, the connection fails. That is a configuration problem and is unrelated to this issue.
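A quick way to tell a port/TLS misconfiguration apart from an application error is to probe both addresses directly (hostname copied from the comment above; Node 18+ global fetch assumed):

```js
// A connection error on the proxied URL but not on the raw port points at
// missing 443 -> 11434 forwarding at the reverse proxy.
for (const url of [
  "https://ollama-server.mylaborary.fr",      // via the proxy, implicit :443
  "http://ollama-server.mylaborary.fr:11434", // Ollama's own port, if exposed
]) {
  try {
    const res = await fetch(url);
    console.log(url, "->", res.status);
  } catch (err) {
    console.log(url, "->", err.cause?.code ?? err.message);
  }
}
```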

xieu90 commented 1 month ago

just for your info:

I tried to put the token/password as a URL parameter: https://ollama.someserver.de/?token=specialtoken

If the token matches the one I configured on the server, it lets me through and I see the "Ollama is running" message in Firefox. If the token is missing or wrong, I get a 403 Forbidden error in Firefox.

So I put https://ollama.someserver.de/?token=specialtoken into the Ollama Base URL as in the previous screenshot. Later in the chat I said "hi" and got: Could not respond to message. Ollama call failed with status code 403: 403 Forbidden (nginx's 403 error page).
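A plausible explanation for that 403: when a client resolves an API path such as /api/chat against a base URL, WHATWG URL resolution discards the base URL's query string, so the token never reaches nginx. A one-line demonstration:

```js
// Resolving an absolute path against a base URL drops the base's query:
const base = "https://ollama.someserver.de/?token=specialtoken";
console.log(new URL("/api/chat", base).href);
// -> https://ollama.someserver.de/api/chat  (token gone, hence the 403)
```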

flefevre commented 1 month ago

Dear xieu90, thank you for your analysis.

I confirm that I am using Traefik to proxy Ollama and LiteLLM, for instance: https://litellm-server.mylaboratory.fr >> http://litellm-server:8001

AnythingLLM is normally able to take the API key, which I have configured with the default sk-1234. If I use the web version of AnythingLLM installed on the same server, I am able to configure it with http://litellm-server:8001 and the API key. If I try to use https://litellm-server.mylaboratory.fr in the web version, I am not able to reach the model.

If I use the standalone version of AnythingLLM installed on a different laptop, I am not able to configure it with HTTPS. https://docs.litellm.ai/docs/proxy/token_auth
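As a sanity check from the laptop, the LiteLLM proxy can be queried directly with the same key AnythingLLM would use; /v1/models is LiteLLM's OpenAI-compatible model-list route (hostname and the sk-1234 key are taken from the comment above):

```js
// A TLS, DNS, or routing failure here would reproduce the
// "LiteLLM:listModels Connection error" in the logs below.
const res = await fetch("https://litellm-server.mylaboratory.fr/v1/models", {
  headers: { Authorization: "Bearer sk-1234" },
});
console.log(res.status, await res.json());
```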

I will need more time to test.

Here are some server logs from when I use HTTPS with the API key:

```
2024-09-11T15:15:20.557342331Z [backend] error: LiteLLM:listModels Connection error.
2024-09-11T15:15:30.948545453Z [backend] error: Error: The OPENAI_API_KEY environment variable is missing or empty; either provide it, or instantiate the OpenAI client with an apiKey option, like new OpenAI({ apiKey: 'My API Key' }).
2024-09-11T15:15:30.948598508Z     at new OpenAI (/app/server/node_modules/openai/index.js:53:19)
2024-09-11T15:15:30.948607932Z     at openAiModels (/app/server/utils/helpers/customModels.js:59:18)
2024-09-11T15:15:30.948615457Z     at getCustomModels (/app/server/utils/helpers/customModels.js:27:20)
2024-09-11T15:15:30.948622718Z     at /app/server/endpoints/system.js:897:41
2024-09-11T15:15:30.948629739Z     at Layer.handle [as handle_request] (/app/server/node_modules/express/lib/router/layer.js:95:5)
2024-09-11T15:15:30.948636782Z     at next (/app/server/node_modules/express/lib/router/route.js:149:13)
2024-09-11T15:15:30.948643487Z     at validateMultiUserRequest (/app/server/utils/middleware/validatedRequest.js:94:3)
2024-09-11T15:15:30.948650317Z     at async validatedRequest (/app/server/utils/middleware/validatedRequest.js:9:12)
2024-09-11T15:15:41.036856983Z [backend] error: LiteLLM:listModels Connection error.
2024-09-11T15:15:50.833079387Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:15:53.756603864Z [backend] error: LiteLLM:listModels Connection error.
2024-09-11T15:15:56.374296126Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:16:02.206412276Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:16:04.783272068Z [backend] error: LiteLLM:listModels Connection error.
2024-09-11T15:16:15.133423154Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:19:30.103815730Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:19:32.924807411Z [backend] error: LiteLLM:listModels Connection error.
2024-09-11T15:20:31.244942267Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:21:06.228618284Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:21:32.790569159Z [backend] info: [Event Logged] - update_llm_provider
2024-09-11T15:21:39.560893164Z [backend] info: [Event Logged] - update_llm_provider
```
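The second error in that log comes from the openai Node client, which throws at construction time when it is given no key. A sketch of what the failing call needs, where the baseURL and key are the assumed values from this thread:

```js
import OpenAI from "openai";

// Without an apiKey (or the OPENAI_API_KEY env var) this constructor throws
// the "environment variable is missing or empty" error seen in the log above.
const client = new OpenAI({
  apiKey: "sk-1234", // LiteLLM proxy key from this thread
  baseURL: "https://litellm-server.mylaboratory.fr/v1",
});

const page = await client.models.list();
console.log(page.data);
```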
