Hey Marvin,
Thanks for the note. I started a PR.
I'll merge once I get a chance to test the behavior in more detail.
-V.
Support for Azure OpenAI is now implemented.
Example usage:
import os
from llmx import llm, TextGenerationConfig

# configure an Azure OpenAI generator
azure_openai_gen = llm(
    provider="openai",
    api_type="azure",
    api_base=os.environ["AZURE_OPENAI_BASE"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-07-01-preview",
)
messages = [{"role": "user", "content": "What is the capital of France?"}]
openai_config = TextGenerationConfig(model="gpt-35-turbo-v0301", use_cache=True)
openai_response = azure_openai_gen.generate(messages, config=openai_config)
print(openai_response.text[0].content)
Hi @victordibia, I believe we can help with this issue. I'm the maintainer of LiteLLM: https://github.com/BerriAI/litellm
TLDR:
We allow you to use any LLM as a drop-in replacement for gpt-3.5-turbo.
If you don't have access to the LLM, you can use the LiteLLM proxy to make requests to it.
You can use LiteLLM in the following ways:
This calls the provider API directly:
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["COHERE_API_KEY"] = "your-key"

messages = [{"role": "user", "content": "Hello, how are you?"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
This is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access it:
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your cohere key

messages = [{"role": "user", "content": "Hello, how are you?"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
Hey Ishaan,
Your library is great and does way more stuff than llmx is trying to do at the moment. Thanks for sharing it, as I did not know about it previously!
In terms of the current issue, this is already implemented in llmx.
V.
@victordibia why do you feel like it does more than you need?
I tried to connect to my Azure endpoint using the method above, but I get an error. I tested my URL and resource name with a REST request and that works. Is there a mistake in my code?
I finally figured it out: if I want to use Azure, the "model" parameter in TextGenerationConfig must be set to my Azure "deployment_name".
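For reference, a minimal sketch of that fix, reusing the example above and assuming a hypothetical Azure deployment named "my-gpt-35-deployment" (use whatever deployment name you created in the Azure portal):

import os
from llmx import llm, TextGenerationConfig

azure_openai_gen = llm(
    provider="openai",
    api_type="azure",
    api_base=os.environ["AZURE_OPENAI_BASE"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-07-01-preview",
)

# "model" must be the Azure deployment name, not the underlying model id
config = TextGenerationConfig(model="my-gpt-35-deployment", use_cache=True)
response = azure_openai_gen.generate(
    [{"role": "user", "content": "Hello!"}], config=config
)
print(response.text[0].content)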
Hello Victor :)
I really like what you are doing and want to use the recently released lida library for projects in my organization. However, my organization uses Azure OpenAI. For this we need to be able to specify three openai properties: api_type, api_base, and api_version.
Here is an example authorization for an Azure OpenAI instance:
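A minimal sketch of what such an authorization looks like with the openai Python SDK (pre-1.0 style), using placeholder values for the endpoint and key:

import openai

openai.api_type = "azure"
openai.api_base = "https://my-resource-name.openai.azure.com/"  # placeholder resource endpoint
openai.api_version = "2023-07-01-preview"
openai.api_key = "<your-azure-openai-key>"  # placeholder key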
Can you please add these properties to llmx so it is possible to use lida?
Best regards, Marvin