LarsVanderwee2002 opened 1 month ago
Hey @LarsVanderwee2002! I'm here to assist you with any bugs, questions, or contributions you have. Just give me a shout if you need help!
Yes, you can adapt the existing API client functions to use the Azure OpenAI API instead of the OpenAI API. Here is an example of how you might do this:
```python
import requests

def call_openai_api(api_key, messages, model="gpt-4o-mini-2024-07-18", n=2):
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    data = {
        "messages": messages,
        "model": model,
        "n": n,
    }
    response = requests.post(url, headers=headers, json=data)
    return response.json()

def call_azure_openai_api(api_key, messages, n=2):
    # Azure serves chat models from /chat/completions on your deployment.
    # The deployment name in the URL selects the model, so no "model" field
    # is needed in the request body.
    url = (
        "https://<your-azure-endpoint>/openai/deployments/<deployment-id>"
        "/chat/completions?api-version=2024-02-01"
    )
    headers = {
        "api-key": api_key,
        "Content-Type": "application/json",
    }
    data = {
        "messages": messages,
        "n": n,
    }
    response = requests.post(url, headers=headers, json=data)
    return response.json()

# Example usage
api_key = "your_api_key"
messages = [{"role": "user", "content": [{"type": "text", "text": "Hello, how are you?"}]}]

# Call OpenAI API
openai_response = call_openai_api(api_key, messages)
print(openai_response)

# Call Azure OpenAI API
azure_response = call_azure_openai_api(api_key, messages)
print(azure_response)
```
In this example, the `call_openai_api` function interacts with the OpenAI API, while the `call_azure_openai_api` function is adapted to interact with the Azure OpenAI API by changing the URL and headers accordingly [1].
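If you would rather not hand-roll the HTTP calls, the official `openai` Python package (v1+) also ships an `AzureOpenAI` client that handles the endpoint and header wiring; the endpoint, deployment name, and api-version below are placeholders, not values from this repo:

```python
from openai import AzureOpenAI  # requires openai>=1.0

# Placeholders: substitute the values from your Azure OpenAI resource.
client = AzureOpenAI(
    api_key="your_api_key",
    api_version="2024-02-01",
    azure_endpoint="https://<your-azure-endpoint>",
)

response = client.chat.completions.create(
    model="<deployment-id>",  # Azure expects the deployment name here
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    n=2,
)
print(response.choices[0].message.content)
```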
Did anyone figure out how to get this working with Azure OpenAI? Here is what I have found in llms.py:
It appears that we may need to extend this to accept the Azure api_base, api_key, model, and anything else needed by Azure.
Please let me know if someone builds this new package. I cannot build it in my current situation...
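For what it's worth, litellm (which the configs in the next comment target) accepts these Azure parameters directly as keyword arguments, so a minimal sketch might look like the following; every name and value here is a placeholder, not tested against this repo:

```python
import litellm

# Placeholders: substitute your own deployment name, endpoint, key, and api-version.
response = litellm.completion(
    model="azure/name-of-your-deployment",
    api_base="https://<your-azure-endpoint>",
    api_key="your_api_key",
    api_version="2024-02-01",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```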
I actually figured out the way to make it work. You can pass a config for all the LLMs used (embedding, summary_llm, llm, and agent). One has to pass each config in the following form for each LLM listed above:
```python
SUMMARY_LLM = {
    "model_list": [
        {
            "model_name": "azure/your-chosen-name",
            "litellm_params": {
                "model": "azure/name-of-your-deployment",
                "api_key": "your-api-key",
                "api_base": "your-api-base",
                "api_version": "api-version-of-deployment",
            },
        }
    ]
}
```
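For completeness, the other three configs mentioned above can follow the same shape; the deployment and model names below are illustrative placeholders only, not values from the original comment:

```python
# Shared Azure credentials (placeholders) reused across the configs.
_AZURE = {
    "api_key": "your-api-key",
    "api_base": "your-api-base",
    "api_version": "api-version-of-deployment",
}

LLM = {"model_list": [{"model_name": "azure/my-chat-model",
                       "litellm_params": {"model": "azure/chat-deployment", **_AZURE}}]}
AGENT = {"model_list": [{"model_name": "azure/my-chat-model",
                         "litellm_params": {"model": "azure/chat-deployment", **_AZURE}}]}
EMBEDDING = {"model_list": [{"model_name": "azure/my-embedding",
                             "litellm_params": {"model": "azure/embedding-deployment", **_AZURE}}]}
```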
When using the Python module one can simply create a Settings object with the configs and pass that when calling the ask function of the module (I used a separate config.py containing all the configs needed):
```python
settings = Settings(
    agent=AgentSettings(
        agent_llm=model_config.AGENT["model_list"][0]["model_name"],
        agent_llm_config=model_config.AGENT,
        agent_type="ToolSelector",
    ),
    summary_llm=model_config.SUMMARY_LLM["model_list"][0]["model_name"],
    summary_llm_config=model_config.SUMMARY_LLM,
    embedding=model_config.EMBEDDING["model_list"][0]["model_name"],
    embedding_config=model_config.EMBEDDING,
    llm=model_config.LLM["model_list"][0]["model_name"],
    llm_config=model_config.LLM,
    paper_directory=paper_directory,
    index_directory=index_directory,
)
```
And then pass this settings object when calling the ask function:
```python
answer = ask(
    "The question to ask",
    settings=settings,
)
```
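For reference, the imports these snippets assume would look roughly like this (module paths may vary by paper-qa version, and `config` is the hypothetical module holding the dicts above):

```python
from paperqa import Settings, ask
from paperqa.settings import AgentSettings
import config as model_config  # hypothetical module with LLM/AGENT/EMBEDDING/SUMMARY_LLM
```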
Is there a way to use the Azure OpenAI API instead of the OpenAI API, e.g. by somehow providing an Azure OpenAI API key?