simonkurtz-MSFT opened this PR 4 months ago (status: Open)
Hi @pamelafox & @kristapratico,
This is how the OpenAI Priority Load Balancer integrates. Never mind the hard-coded backend and the location of the backends list in this PR. I don't intend to ask for a merge, but this was the best way to give you an idea of the setup.
If you have two AOAI instances with the same model, you can plug them both in and should see load-balancing.
I brought up two AOAI instances and related assets and configured both instances as backends in `app.py`. Then I started a conversation.
Both backends are responding. Note that the distribution is not uniform, because the list of available backends is randomized (this is necessary for multi-process workloads).
At no point did the conversation break down or show any kind of error through the chat bot.
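To illustrate the randomized selection among available backends described above, here is a minimal, hypothetical sketch. The names (`Backend`, `select_backend`) and structure are assumptions for illustration, not the actual code of the openai-priority-loadbalancer package:

```python
import random

# Hypothetical sketch: pick randomly among available backends of the
# highest-priority tier. Not the package's actual implementation.

class Backend:
    def __init__(self, host, priority, available=True):
        self.host = host
        self.priority = priority      # lower number = higher priority
        self.available = available    # False while that backend's circuit is open

def select_backend(backends):
    """Randomize among available backends in the best (lowest) priority tier."""
    available = [b for b in backends if b.available]
    if not available:
        raise RuntimeError("No available backends")
    top = min(b.priority for b in available)
    return random.choice([b for b in available if b.priority == top])

backends = [
    Backend("aoai-eastus.openai.azure.com", priority=1),
    Backend("aoai-westus.openai.azure.com", priority=1),
]
chosen = select_backend(backends)
```

Because the choice within a priority tier is random rather than round-robin, two equal-priority backends receive roughly, but not exactly, equal traffic, which matches the non-uniform distribution seen in the test.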
Cool! I made a few changes to the PR to make it a little easier to test out, by actually making the additional backend deployment, mind if I push them to the branch?
I think we should mention this option in the Productionizing guide, and if there are multiple customers wanting to use this approach, we could consider integrating it into main as an option.
Hi Pamela, please do push! I very much welcome your expertise and improvements. If there are aspects of the 1.0.9 package itself that should/need to be improved, I'm all ears there, too, of course.
Thank you so much! I know this took an extraordinary amount of your time.
Here are what my usage graphs look like during a load test btw:
Help me understand your test results, please. Are you hitting different backends or just different models?
@simonkurtz-MSFT Those graphs were for two different OpenAI instances in the same region.
@simonkurtz-MSFT Could you send a separate PR adding a mention of this approach to https://github.com/Azure-Samples/azure-search-openai-demo/blob/main/docs/productionizing.md#openai-capacity with a link to this PR? You could contrast when someone might opt for this over ACA/APIM (presumably cost/complexity).
Hi @pamelafox, could I trouble you for another review of this PR, please? Thank you very much for all your help!
This PR introduces the openai-priority-loadbalancer as a native Python option to target one or more Azure OpenAI endpoints. Among the features of the load-balancer are:
- `Retry-After` headers returned from Azure OpenAI trigger a temporary open circuit for that endpoint.
- The returned `Retry-After` header value will be the lowest / soonest of all backends to facilitate a very likely successful retry by the OpenAI Python API Library as soon as possible.

Relevant links:
This PR can be merged after @pamelafox's approval.