gregrobison opened 5 months ago
If I had to guess, it's likely that ollama-webui needs to be able to send the `deployment_id` and `api_version` in requests to Azure endpoints, which it currently does not have settings variables for.
One workaround you could do for the time being is using LiteLLM proxy to connect to your Azure endpoint, and then connect ollama-webui to litellm. Here's their documentation on setting up Azure endpoints: https://litellm.vercel.app/docs/providers/azure
And here's an example `docker-compose.yaml` I use to deploy both ollama-webui and litellm together in a stack:
```yaml
version: '3.9'
services:
  webui:
    image: ghcr.io/ollama-webui/ollama-webui:main
    environment:
      - "OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL}"
      - "OPENAI_API_BASE_URL=http://openai-proxy:8000/v1"
      - "OPENAI_API_KEY=${LITELLM_API_KEY}"
    ports:
      - 3000:8080
    volumes:
      - ./ollama-webui/data:/app/backend/data
    restart: unless-stopped
  openai-proxy:
    image: ghcr.io/berriai/litellm:main-latest
    environment:
      - "MASTER_KEY=${LITELLM_API_KEY}"
      - "OPENAI_API_KEY=${OPENAI_API_KEY}"
      - "MISTRAL_API_KEY=${MISTRAL_API_KEY}"
      - "ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}"
    ports:
      - 4000:8000
    volumes:
      - ./litellm/config.yaml:/app/config.yaml
    command: [ "--config", "/app/config.yaml", "--port", "8000" ]
    restart: unless-stopped
```
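The compose file above mounts `./litellm/config.yaml` into the litellm container. For anyone wondering what that file might contain for an Azure deployment, here's a minimal sketch; the resource name, deployment name, and `api_version` value are placeholders you'd replace with your own, and the `os.environ/` key reference assumes the LiteLLM env-var lookup syntax:

```yaml
model_list:
  - model_name: gpt-4                      # the name Open WebUI will see
    litellm_params:
      model: azure/<your-deployment-name>  # placeholder deployment name
      api_base: https://<your-resource>.openai.azure.com/
      api_version: "2023-05-15"            # placeholder; use a version your deployment supports
      api_key: os.environ/AZURE_API_KEY    # read from the container environment
```

See the LiteLLM Azure docs linked above for the full set of supported parameters.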
@tjbck - Is the idea to provide support for other LLM endpoints via LiteLLM in the webUI, or to have an explicit option for providing an Azure OpenAI endpoint + key?
I created this Ansible Playbook to address this exact need. Please take a look: https://github.com/SpaceTerran/ollama-webui_and_liteLLM-proxy
I think a quick (temporary) solution could also be hardcoding the API version, appended whenever the URL contains the Azure domain. This would skip the need to update the UI and could probably be done in a few lines of code.
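A minimal sketch of that idea, assuming hypothetical helper and constant names (this is not actual ollama-webui code, and the default version string is an assumption):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Assumed default: Azure OpenAI requires an api-version query parameter.
DEFAULT_AZURE_API_VERSION = "2023-05-15"

def with_azure_api_version(url: str, api_version: str = DEFAULT_AZURE_API_VERSION) -> str:
    """Append api-version to requests aimed at Azure OpenAI endpoints."""
    parsed = urlparse(url)
    if not parsed.hostname or not parsed.hostname.endswith(".openai.azure.com"):
        return url  # not an Azure endpoint, leave untouched
    if "api-version" in parse_qs(parsed.query):
        return url  # caller already pinned a version
    sep = "&" if parsed.query else "?"
    return f"{url}{sep}{urlencode({'api-version': api_version})}"
```

This keeps non-Azure URLs untouched, which sidesteps the UI change, though it still bakes in one version string (hence the concern below about users on other API versions).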
Personally I wouldn't be against doing this in my own code but I think we can strive for higher here. If it's to be modified, is it much more additional work to do it right? Because I can see the issue reports now from that one guy that uses a different API version 😆
I thought I'd write here in case others find it useful.
I was able to get Open WebUI (v0.1.118) to use Azure OpenAI GPT-4. I did this through litellm (1.35.5); my docker compose file is just a combination of four services: openwebui, postgresql, ollama, litellm. One important gotcha, though: your litellm deployment cannot have a `LITELLM_MASTER_KEY` defined. Took me a while to figure out. I'm still not sure how to get Open WebUI to use litellm with a `LITELLM_MASTER_KEY` set. I'm still trying to determine if this problem is:
Same for me, I have been using OpenAI on Azure for a while now. No issues at all with the LiteLLM integration.
@Zulban @Michelklingler How do you make Azure Dall-E 3 work via LiteLLM in WebUI? The Image setting does not provide an input field to specify the model.
@Michelklingler I am currently trying to integrate some Azure OpenAI models with the liteLLM integration but somehow can't get it to work. My litellm config looks like this, but I am probably missing something. How did you get it to work?
```yaml
model_list:
  - model_name: gpt-4
    litellm_params:
      model: azure/test-gpt-4-turbo
      api_base: https://<azure-open-ai-service>.openai.azure.com/
      api_key: "<redacted>"
```
Hey @jakoberpf!
Below is my config; it works like a charm with Mistral-large and GPT4-Turbo:
```yaml
model_list:
  - litellm_params:
      api_base: https://OpenAiEndpoint.openai.azure.com/
      api_key: xxx
      model: azure/gpt4turbo
    model_info:
      id: f609d5af-7d43-45dc-81dd-b9c26212e259
    model_name: "Azure - GPT4-Turbo"
  - litellm_params:
      api_base: https://Mistral-large-serverless.eastus2.inference.ai.azure.com/v1
      api_key: xxx
      model: mistral/Mistral-large
    model_info:
      id: ccaa5796-64f3-4691-b4a6-0598d2450ec7
    model_name: "Azure - Mistral-large"
```
I slightly changed the endpoint links and removed the API keys, but it should work following this template!
Let me know if you get it to work! M.
> @Zulban @Michelklingler How do you make Azure Dall-E 3 work via LiteLLM in WebUI? The Image setting does not provide an input field to specify the model.
Mmm, this is actually a good point. I'm not sure that LiteLLM, as configured here, can manage pulling an image from an endpoint; this is above my understanding of the code. But I would be interested to know as well.
@Michelklingler This worked great, thank you. Do you know what `model_info` is actually referencing?
thx @Michelklingler, that works really well. 👍🏻
> @Michelklingler This worked great thank you. Do you know what the `model_info` is actually referencing?
@Michelklingler Can you let us know where to get the model_id param?
> @Michelklingler This worked great thank you. Do you know what the `model_info` is actually referencing?
>
> @Michelklingler Can you let us know where to get the model_id param?
Hi!
I'm not actually completely sure where the model info comes from. I think it is a value that auto-populates when I use the WebUI interface to add a new model, probably generated automatically by liteLLM. As long as this value is different for every liteLLM model it should be fine; it's only used by LiteLLM to identify each model internally.
That's my assumption. Let me know if it works!
I successfully used the project at https://github.com/haibbo/cf-openai-azure-proxy to convert Azure OpenAI to OpenAI, and it has been functioning perfectly.
Implemented as Pipeline: https://github.com/open-webui/pipelines/blob/main/examples/providers/azure_openai_pipeline.py
Tried the pipeline and hit a 500 error. On further checking, the request body JSON does not match the Azure OpenAI ChatCompletion API schema.
> Tried the pipeline and hit a 500 error. On further checking, the request body JSON does not match the Azure OpenAI ChatCompletion API schema.

Same here, is there a way to fix it?
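For anyone attempting the PR mentioned below, the translation the pipeline needs is roughly this, per Microsoft's reference docs: the model name moves out of the request body and into the URL path as the deployment name, the key goes in an `api-key` header instead of `Authorization`, and `api-version` becomes a required query parameter. A sketch under those assumptions (function and parameter names are mine, and the default version string is a placeholder):

```python
def to_azure_request(openai_body: dict, endpoint: str, deployment: str,
                     api_key: str, api_version: str = "2023-05-15"):
    """Rewrite an OpenAI-style chat completion request into the Azure shape."""
    # Azure identifies the model by deployment name in the URL, not the body.
    body = {k: v for k, v in openai_body.items() if k != "model"}
    url = (f"{endpoint.rstrip('/')}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    return url, headers, body
```

If the pipeline forwards the OpenAI-style body verbatim (with `model` still inside, and no `api-version` on the URL), a 500 from the Azure endpoint is the kind of failure you'd expect.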
@TDaubignyElsan @kentsuiGitHub I don't use Azure myself (I don't have the keys to test it) so PR welcome!
> @TDaubignyElsan @kentsuiGitHub I don't use Azure myself (I don't have the keys to test it) so PR welcome!
Related documentation is located here, if it can help: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
> Related documentation is located here, if it can help: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
The issue isn't finding documentation; it's that the team is unable to test it, since we do not have access. We could be open to someone offering us a limited-scope key via Discord PM to move this forward.
> Related documentation is located here, if it can help: https://learn.microsoft.com/en-us/azure/ai-services/openai/reference
>
> The issue isn't finding documentation; it's that the team is unable to test it, since we do not have access. We could be open to someone offering us a limited-scope key via Discord PM to move this forward.
I can help with that, if someone wants to shoot me a DM. On Discord you can also find me under Gyarbij.
**Is your feature request related to a problem? Please describe.**
I am unable to get my Azure OpenAI API endpoint to be recognized. Are there any additional settings needed?

**Describe the solution you'd like**
Azure OpenAI endpoint cannot connect.

**Describe alternatives you've considered**
The API works in other environments.