Closed — crypticatul closed this issue 6 days ago
Unfortunately, at this time, all providers are hardcoded. It would be nice to support this through a config file. I got Azure working, but I had to add it in a lot of places in both the backend and the frontend. It's essentially the same as OpenAI, except you have to use a different llama_index client. Example in chat.py --> get_llm:
...
elif model == ChatModel.AZURE:
    return AzureOpenAI(
        api_version=os.environ["AZURE_OPENAI_API_VERSION"],
        deployment_name=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
        model=os.environ["AZURE_OPENAI_MODEL"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    )
...
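The config-file idea mentioned above could look something like this. This is a hypothetical sketch, not the repo's actual code: the `providers.json` schema and the `llm_kwargs` helper are assumptions, and the resolved kwargs would then be passed to the matching llama_index client (e.g. `AzureOpenAI(**llm_kwargs("azure"))`).

```python
import json
import os

# Assumed config format: each provider maps constructor arguments to the
# environment variables that supply them. In practice this would live in a
# providers.json file; it is inlined here so the sketch is self-contained.
CONFIG = json.loads("""
{
  "azure": {
    "client": "AzureOpenAI",
    "env": {
      "api_version": "AZURE_OPENAI_API_VERSION",
      "deployment_name": "AZURE_OPENAI_CHAT_DEPLOYMENT_NAME",
      "model": "AZURE_OPENAI_MODEL",
      "api_key": "AZURE_OPENAI_API_KEY",
      "azure_endpoint": "AZURE_OPENAI_ENDPOINT"
    }
  }
}
""")

def llm_kwargs(provider: str) -> dict:
    """Resolve a provider's constructor kwargs from the env vars named in the config."""
    spec = CONFIG[provider]
    return {arg: os.environ[var] for arg, var in spec["env"].items()}

# In get_llm, the hardcoded elif branches could then collapse to something like:
#     return AzureOpenAI(**llm_kwargs("azure"))
```

Adding a new provider would then be a config change plus one entry in a client registry, instead of edits scattered across the backend and frontend.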
You will also have to wire up the Instructor client to support related queries, in related_queries.py > instructor_client:
...
elif model == ChatModel.AZURE:
    return from_azureopenai(openai.AsyncAzureOpenAI(
        api_version=os.environ["AZURE_OPENAI_API_VERSION"],
        azure_deployment=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT_NAME"],
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    ))
...
Hey, I added support for Azure! Let me know if you have any trouble setting it up!
Hi, thanks for adding Azure support. I just have a couple of questions about the env vars:
I'm using the env vars below and still getting the error `OPENAI_API_KEY environment variable not found`.
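That error suggests some code path may still read `OPENAI_API_KEY` unconditionally, even on the Azure branch. For reference, a hypothetical `.env` covering the variables used in the snippets above (all values are placeholders you'd replace with your own):

```shell
# Azure variables referenced by get_llm / instructor_client
AZURE_OPENAI_API_VERSION=2024-02-01
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME=my-gpt4o-deployment
AZURE_OPENAI_MODEL=gpt-4o
AZURE_OPENAI_API_KEY=your-azure-key-here
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/

# If some code path still checks OPENAI_API_KEY unconditionally, a dummy
# value may silence the error (assumption, not verified against the repo):
OPENAI_API_KEY=unused
```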
Hi,
I want to use my Azure OpenAI API for this — can you add an option for it? GPT-3.5-Turbo and GPT-4o would be enough as options. I am deploying on Vercel.
Thanks.