Hi @alexander-brady,
Good question. You'll need to use role tags (like `system`, `user`, and `assistant`) for any model that requires a chat-based API:
```python
from guidance import gen, system, user, assistant

# gpt is an already-constructed chat model (e.g. from guidance.models)
with system():
    lm = gpt + "You are a helpful assistant."

with user():
    lm += "What is the meaning of life?"

with assistant():
    lm += gen("response")
```
We have an example notebook detailing AzureOpenAI Chat usage here: https://github.com/guidance-ai/guidance/blob/5c9776e0aeffb64a5b4b96a794322c96b58b92b3/notebooks/api_examples/models/AzureOpenAI.ipynb
Complete portability between chat and non-chat models is something we're still actively thinking about, so for now you'll have to write chat-aligned guidance programs for chat-based models. Let me know if you have any further questions!
Hi @Harsha-Nori,
Thanks for the quick response. I originally ran the code just as you sent it (as per the notebook), but I get this error:

```
Exception: You need to use a chat model in order the use role blocks like `with system():`! Perhaps you meant to use the AzureOpenAICompletionChat class?
```
I tried changing the model to the `AzureOpenAICompletionChat` class, which returned the following error:

```
AttributeError: module 'guidance.models' has no attribute 'AzureOpenAICompletionChat'
```
@alexander-brady, what is the URI of the model you're using? And what is the ultimate type of your `gpt` variable (it won't be `AzureOpenAI`, because Python)? The logic which determines the final class you get is:
https://github.com/guidance-ai/guidance/blob/e18be92f5348c1c33c1894b4ff5a805eda9d410c/guidance/models/_azure_openai.py#L61
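As a rough sketch of that dispatch (illustrative only; the function name and return values here are made up, see the linked source for the real logic), the class you end up with is chosen by the suffix of the endpoint's URL path:

```python
from urllib.parse import urlparse

def pick_azure_class(azure_endpoint: str) -> str:
    # Hypothetical sketch of the selection logic described above:
    # a path ending in /chat/completions yields the chat class,
    # anything else falls through to the completion class.
    path = urlparse(azure_endpoint).path
    if path.endswith("/chat/completions"):
        return "AzureOpenAIChat"
    return "AzureOpenAICompletion"
```

This is why a bare endpoint like `https://***.openai.azure.com/` lands on the completion class even when the deployed model only supports the chat API.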
@riedgar-ms, the URI is `https://***.openai.azure.com/`, so the path doesn't end with `/chat/completions`. The type of the `gpt` variable is `guidance.models._azure_openai.AzureOpenAICompletion`.
@alexander-brady, I see that the AzureAI playground has changed the URI format in their Python code :-/

If you go to the playground and get the `curl` code, you should see a URI along the lines of:

```
https://******.openai.azure.com/openai/deployments/*****/chat/completions?api-version=*****
```

That is the format we're currently expecting, but the class should be updated.
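A URI in that format carries everything the class needs, and it can be pulled apart with the standard library alone. A small sketch (the resource and deployment names below are placeholders I invented, not values from this thread):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical example URI in the expected playground format
url = ("https://myresource.openai.azure.com/openai/deployments/"
       "mydeployment/chat/completions?api-version=2023-12-01-preview")

parsed = urlparse(url)
# path splits into ['', 'openai', 'deployments', '<deployment>', 'chat', 'completions']
parts = parsed.path.split("/")
deployment = parts[3]
api_version = parse_qs(parsed.query)["api-version"][0]
is_chat = parsed.path.endswith("/chat/completions")
```

A bare endpoint like `https://***.openai.azure.com/` has none of these components, which is what trips up the current parsing.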
I think this was fixed by @riedgar-ms in https://github.com/guidance-ai/guidance/pull/770 . If you install from source, hopefully this works now :).
Thanks for the fix. One small thing: Azure's OpenAI API uses a slightly different naming convention for its 3.5 models than the OpenAI API, namely `gpt-35-turbo` as opposed to `gpt-3.5-turbo`, which is not reflected in the `chat_model_pattern` regex. Updating the regex to account for this solves the issue, however.
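To illustrate the mismatch (these patterns are simplified stand-ins, not guidance's actual `chat_model_pattern`): an OpenAI-style pattern with a mandatory dot rejects Azure's name, while making the dot optional accepts both spellings.

```python
import re

# Simplified, illustrative patterns only
openai_style = re.compile(r"gpt-3\.5-turbo")    # requires the dot
azure_aware = re.compile(r"gpt-3\.?5-turbo")    # dot optional, matches both

azure_name = "gpt-35-turbo"
openai_name = "gpt-3.5-turbo"
```

With the first pattern, `azure_name` never matches, so the model is not recognised as a chat model; the second pattern matches both names.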
@alexander-brady did you want to make a PR to fix the regex? Or I can put it on my ToDo list.
PR created.
I also handled the case where `pathlib.Path(parsed_url.path).parts` doesn't contain a deployment name, returning `None` instead. Otherwise, running it raised `IndexError: tuple index out of range`, as `pathlib.Path(parsed_url.path).parts` was returning `('\\',)` for my Azure OpenAI endpoint.
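The guard being described can be sketched like this (a hypothetical helper, not the PR's actual code; `PurePosixPath` is used here so the behaviour is platform-independent, whereas `pathlib.Path` on Windows yields the `('\\',)` tuple mentioned above):

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def deployment_from_url(url: str):
    # For a bare endpoint like https://x.openai.azure.com/ the path is "/",
    # so parts is ("/",) and blind indexing would raise IndexError.
    parts = PurePosixPath(urlparse(url).path).parts
    if len(parts) < 4 or parts[2] != "deployments":
        return None
    return parts[3]
```

The bare endpoint now yields `None` instead of crashing, while a full playground-style URI still yields the deployment name.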
**The bug**

Guidance doesn't work using newer Azure API versions, as models such as `gpt-35-turbo` require the chat completions API.

```
BadRequestError: Error code: 400 - {'error': {'code': 'OperationNotSupported', 'message': 'The completion operation does not work with the specified model, gpt-35-turbo. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.'}}
```
**To Reproduce**

**System info (please complete the following information):**

- Guidance version (`guidance.__version__`): 0.1.13