valentinfrlch / ha-llmvision

Let Home Assistant see!
Apache License 2.0
285 stars 12 forks

Support Azure Open AI #64

Open pantherale0 opened 2 months ago

pantherale0 commented 2 months ago

Is your feature request related to a problem? Please describe. Currently this integration does not support Azure OpenAI, which works slightly differently from a standard OpenAI custom server.

Integration setup fails with Could not connect to the server. Check you API key or IP and port

Describe the solution you'd like Support for Azure OpenAI endpoints.

Describe alternatives you've considered Using a proxy to forward the requests, however this is not as clean as native support.

Additional context N/A

valentinfrlch commented 2 months ago

Will look into this

valentinfrlch commented 2 weeks ago

Have you tried using the custom OpenAI compatible option?

pantherale0 commented 2 weeks ago

Hi,

Yes, however it doesn't work; the above error is logged. Example endpoint URL:

https://{resource_name}.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2023-03-15-preview
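For context, a hedged sketch (all names and keys below are placeholders) of why a plain base-URL override can't reach this endpoint: Azure puts the deployment in the path, requires an api-version query string, and authenticates with an api-key header instead of a Bearer token:

```python
# Sketch comparing the two request targets. Placeholder values throughout;
# this is not the integration's actual code.

def openai_request(model: str, api_key: str):
    """Standard OpenAI: fixed path, model in the body, Bearer token auth."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {"Authorization": f"Bearer {api_key}"}
    body = {"model": model, "messages": []}
    return url, headers, body

def azure_request(resource: str, deployment: str, api_key: str,
                  api_version: str = "2023-03-15-preview"):
    """Azure OpenAI: deployment name in the path, api-version query string,
    and an 'api-key' header instead of a Bearer token."""
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    headers = {"api-key": api_key}
    body = {"messages": []}
    return url, headers, body
```

A connection test that only swaps the host (as the custom OpenAI provider does) misses all three of these differences, which would explain the generic connection error.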

valentinfrlch commented 2 weeks ago

Ah, sorry, you mentioned that initially. Is that still the case? I updated the custom OpenAI provider in the last update.

pantherale0 commented 2 weeks ago

No problem. Yes, it's still the case; I tried it again today.

xiasi0 commented 1 week ago

#86 Our situation seems to be the same. Have you solved it yet?

pantherale0 commented 1 week ago

I believe the API itself is actually slightly different to the standard OpenAI spec which is why it doesn't work. But #86 will most likely use the same service as the PAYG production instances (that I'm trying to use).

xiasi0 commented 1 week ago

> I believe the API itself is actually slightly different to the standard OpenAI spec which is why it doesn't work. But #86 will most likely use the same service as the PAYG production instances (that I'm trying to use).

I forcibly modified the base URL for the OpenAI Conversation core integration to https://models.inference.ai.azure.com and it works.

But LLM Vision cannot.😂

valentinfrlch commented 1 week ago

Looking at the Azure OpenAI REST API reference, it is pretty obvious why it doesn't work. Azure OpenAI is not compatible with OpenAI's API. The endpoints are somewhat similar (`/chat/completions` vs `/{deployment_id}/completions`), but the actual request body is completely different.

OpenAI:

{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
}

Azure:

{
    "prompt": [
        "tell me a joke about mango"
    ],
    "max_tokens": 32,
    "temperature": 1.0,
    "n": 1
}

For now I can only recommend using OpenAI directly instead, or if you're looking for a free provider, I believe Gemini has a free tier.

xiasi0 commented 1 week ago

> Looking at the Azure OpenAI REST API reference, it is pretty obvious why it doesn't work. Azure OpenAI is not compatible with OpenAI's API. The endpoints are somewhat similar (`/chat/completions` vs `/{deployment_id}/completions`), but the actual request body is completely different.
>
> OpenAI:
>
> {
>     "model": "gpt-4o-mini",
>     "messages": [{"role": "user", "content": "Say this is a test!"}],
>     "temperature": 0.7
> }
>
> Azure:
>
> {
>     "prompt": [
>         "tell me a joke about mango"
>     ],
>     "max_tokens": 32,
>     "temperature": 1.0,
>     "n": 1
> }
>
> For now I can only recommend using OpenAI directly instead, or if you're looking for a free provider, I believe Gemini has a free tier.

If it's worth the effort, I hope it can be made compatible, because Gemini is not available in my region. My GPU doesn't have enough VRAM left, so I cannot run LLM Vision locally, and I cannot afford more bills or a GPU with more VRAM.

pantherale0 commented 1 week ago

@valentinfrlch is there a reason OpenAI's Python package isn't used to facilitate communication with OpenAI-based LLMs?

valentinfrlch commented 1 week ago

I figured that since there are several other providers, I might as well use plain HTTP requests for all of them, as that is easier to maintain. I am in the process of rewriting request_helpers.py though.

Is there a reason why the python package should be used?

pantherale0 commented 1 week ago

I see.

The package supports both standard OpenAI providers (including self-hosted/custom environments) and Azure environments, and shares a common set of functions and exceptions between the two. It's generally considered best practice to use the SDK for a given language if the company/service offers one.

kiloptero commented 1 week ago

Same here. I'm not able to use Azure OpenAI. It would be great to be able to use it.