victordibia / llmx

An API for Chat Fine-Tuned Large Language Models (llm)
MIT License

No support for Azure OpenAI #1

Closed Marvin-Vitcu closed 1 year ago

Marvin-Vitcu commented 1 year ago

Hello Victor :)

I really like what you are doing and want to use the recently released lida library for projects in my organization. However, my organization uses Azure OpenAI, so we need to be able to specify three additional openai properties: api_type, api_base, and api_version.

Here is an example authorization for an Azure OpenAI instance:

import openai
import os

openai.api_type = "azure"
openai.api_base = "https://yourendpoint.openai.azure.com/"
openai.api_version = "2023-07-01-preview"
openai.api_key = os.getenv("OPENAI_API_KEY")
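
With the legacy openai SDK (pre-1.0, as used here), requests then target an Azure deployment rather than a model name; a minimal sketch, assuming a hypothetical deployment named "my-gpt-35-deployment":

# "engine" is the Azure deployment name, not an OpenAI model name
response = openai.ChatCompletion.create(
    engine="my-gpt-35-deployment",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["choices"][0]["message"]["content"])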

Can you please add these properties to llmx so it is possible to use lida?

Best regards Marvin

victordibia commented 1 year ago

Hey Marvin,

Thanks for the note. I started a PR.

Will merge once I get a chance to test behaviors in more detail.

-V.

victordibia commented 1 year ago

Support for Azure OpenAI is now implemented.

Example usage:

import os
from llmx import llm, TextGenerationConfig

# example chat messages in the OpenAI format
messages = [{"role": "user", "content": "What is the capital of France?"}]

azure_openai_gen = llm(
    provider="openai",
    api_type="azure",
    api_base=os.environ["AZURE_OPENAI_BASE"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-07-01-preview",
)
openai_config = TextGenerationConfig(model="gpt-35-turbo-v0301", use_cache=True)
openai_response = azure_openai_gen.generate(messages, config=openai_config)
print(openai_response.text[0].content)

ishaan-jaff commented 1 year ago

Hi @victordibia, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm

TL;DR: LiteLLM lets you use any LLM as a drop-in replacement for gpt-3.5-turbo. If you don't have access to an LLM, you can use the LiteLLM proxy to make requests to it.

You can use LiteLLM in the following ways:

With your own API key:

This calls the provider's API directly:

from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["COHERE_API_KEY"] = "your-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
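
Either way the response follows the OpenAI format, so reading the generated text looks the same (a sketch):

# the text sits under choices[0] in the OpenAI-format response
print(response["choices"][0]["message"]["content"])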

Using the LiteLLM Proxy with a LiteLLM Key

This is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access it:

from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your cohere key

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
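
And since this issue is about Azure OpenAI: LiteLLM also routes to Azure deployments via an "azure/" model prefix; a minimal sketch, assuming a hypothetical deployment named "my-gpt-35-deployment":

## set Azure ENV variables
os.environ["AZURE_API_KEY"] = "your-azure-key"
os.environ["AZURE_API_BASE"] = "https://yourendpoint.openai.azure.com/"
os.environ["AZURE_API_VERSION"] = "2023-07-01-preview"

# azure call -- "model" is the azure/ prefix plus the deployment name
response = completion(model="azure/my-gpt-35-deployment", messages=messages)
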
victordibia commented 1 year ago

Hey Ishaan,

Your library is great and does way more stuff than llmx is trying to do at the moment. Thanks for sharing it, as I did not know about it previously!

In terms of the current issue, this is already implemented in llmx.

V.

krrishdholakia commented 1 year ago

@victordibia why do you feel like it does more than you need?

nuaabuaa07 commented 10 months ago

Support for azure OAI is now implemented.

Example usage is

azure_openai_gen = llm(
    provider="openai",
    api_type="azure",
    api_base=os.environ["AZURE_OPENAI_BASE"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-07-01-preview",
)
openai_config = TextGenerationConfig(model="gpt-35-turbo-v0301", use_cache=True)
openai_response = azure_openai_gen.generate(messages, config=openai_config)
print(openai_response.text[0].content)

I tried to connect to my Azure endpoint using the method above, but I get an error. I tested my URL and resource name with a REST request and they are fine. Is there a mistake in my code?

nuaabuaa07 commented 10 months ago

I finally figured out that to use Azure, the model parameter in TextGenerationConfig must be set to my deployment_name in Azure.
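
For example, a minimal sketch building on the snippet above, assuming a hypothetical deployment named "my-gpt-35-deployment":

# "model" must be the Azure deployment name, not the underlying model name
openai_config = TextGenerationConfig(model="my-gpt-35-deployment", use_cache=True)
openai_response = azure_openai_gen.generate(messages, config=openai_config)
print(openai_response.text[0].content)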