Open twilight2001 opened 1 month ago
I got it working by adding this chunk of code in base.py and making a custom_llm block. It's not pretty, because Azure does not have a model parameter, only a deployment name:
elif host == 'azure':
    # Get the API key and endpoint from environment variables
    azure_openai_key = os.environ.get("AZURE_OPENAI_API_KEY")
    azure_openai_endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
    if not azure_openai_key or not azure_openai_endpoint:
        raise ValueError("AZURE_OPENAI_API_KEY and AZURE_OPENAI_ENDPOINT environment variables must be set.")
    # Create the AzureOpenAI client instance
    client = AzureOpenAI(
        api_key=azure_openai_key,
        api_version="2024-02-15-preview",
        azure_endpoint=azure_openai_endpoint,
        azure_deployment=model  # the SDK parameter is azure_deployment; set this to your deployment name
    )
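For reference, a minimal setup sketch (with placeholder values, not real credentials) showing the two environment variables the patched branch above reads before it is invoked:

```python
# Hypothetical setup sketch: the patched 'azure' branch reads these two
# environment variables, so they must be set before strict_json is called.
import os

os.environ["AZURE_OPENAI_API_KEY"] = "<your-key>"  # placeholder value
os.environ["AZURE_OPENAI_ENDPOINT"] = "https://<resource>.openai.azure.com/"  # placeholder

# The branch raises ValueError if either is missing; a quick guard mirrors that check:
missing = [k for k in ("AZURE_OPENAI_API_KEY", "AZURE_OPENAI_ENDPOINT")
           if not os.environ.get(k)]
assert not missing, f"Set these environment variables first: {missing}"
```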
Does adding your custom_llm, in the form llm = custom_llm, into strict_json() directly solve this OpenAI key issue?
Example code (Refer to Tutorial 0)
from openai import AzureOpenAI

### Put in your Azure OpenAI keys here ###
AZURE_OPENAI_KEY = ""
AZURE_OPENAI_ENDPOINT = ""

def azureOpenAiChat(
    system_prompt: str, user_prompt: str, model: str = "GPT3_5Turbo", temperature: float = 0, **kwargs
) -> str:
    azure_open_ai_client = AzureOpenAI(
        api_key = AZURE_OPENAI_KEY,
        azure_endpoint = AZURE_OPENAI_ENDPOINT,
        api_version = "2024-02-15-preview"
    )
    response = azure_open_ai_client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}],
        temperature=temperature,
        **kwargs,
    )
    return response.choices[0].message.content

def azure_chatgpt(system_prompt: str, user_prompt: str) -> str:
    return azureOpenAiChat(system_prompt=system_prompt, user_prompt=user_prompt, model="GPT3_5Turbo", temperature=0)
Run the Custom LLM
res = strict_json(system_prompt = 'You are a classifier',
                  user_prompt = 'It is a beautiful and sunny day',
                  output_format = {'Sentiment': 'Type of Sentiment',
                                   'Adjectives': 'Array of adjectives',
                                   'Words': 'Number of words'},
                  llm = azure_chatgpt)  # set this to your own LLM

print(res)
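As a sanity check that the llm hook works independently of Azure, here is a hypothetical offline stub with the same (system_prompt, user_prompt) -> str signature; any callable of this shape can be passed as llm=... to strict_json, and you can swap in azure_chatgpt once your keys are in place:

```python
import json

def stub_llm(system_prompt: str, user_prompt: str) -> str:
    # Hypothetical offline stand-in for azure_chatgpt: returns a fixed JSON
    # string instead of calling any API, just to exercise the llm hook.
    return json.dumps({
        "Sentiment": "Positive",
        "Adjectives": ["beautiful", "sunny"],
        "Words": 7,
    })

# The stub is a plain function call, so it can be tested without credentials.
reply = stub_llm("You are a classifier", "It is a beautiful and sunny day")
parsed = json.loads(reply)
```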
Dear John, yes it does. I hadn't watched Tutorial 0; it works great! Cheers, GLY
Hi John and team, thanks for creating the package. I see the advantages, but since we're in unsupported countries we must use Azure OpenAI, and I cannot get the LLM function working due to the following error. The source code seems to support only OPENAI_API_KEY; if there is another workaround, kindly let me know :) Thanks!
ERROR

OpenAIError                               Traceback (most recent call last)
Cell In[31], line 1
----> 1 res = strict_json(system_prompt = 'You are a classifier',
      2                   user_prompt = 'It is a beautiful and sunny day',
      3                   output_format = {'Sentiment': 'Type of Sentiment',
      4                                    'Adjectives': 'List of adjectives',
      5                                    'Words': 'Number of words'})
      7 print(res)
....
---> 98 raise OpenAIError(
     99     "The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable"
    100 )
    101 self.api_key = api_key
    103 if organization is None:

OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
LLM FUNCTION CALL

def custom_llm(system_prompt: str, user_prompt: str):
    # ensure your LLM imports are all within this function
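Completing that skeleton under the same convention (imports kept inside the function), here is a hedged sketch; the deployment name, API version, and environment-variable names are assumptions to adapt to your own Azure resource:

```python
def custom_llm(system_prompt: str, user_prompt: str) -> str:
    # Ensure your LLM imports are all within this function.
    import os
    from openai import AzureOpenAI

    client = AzureOpenAI(
        api_key=os.environ["AZURE_OPENAI_API_KEY"],          # assumed env-var name
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed env-var name
        api_version="2024-02-15-preview",
    )
    response = client.chat.completions.create(
        model="GPT3_5Turbo",  # your Azure *deployment* name, not a model id
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```

Because the imports live inside the function body, merely defining custom_llm does not require the openai package or any credentials; they are only needed when strict_json actually calls it.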