KishenKumar27 opened this issue 9 months ago
Hi @KishenKumar27, thanks for pointing this out. I'm noticing a few issues in our AzureOpenAI classes, but the principal one is that Azure deployments can have any user-specified name, which means we can't inspect the name string to detect the appropriate tokenizer to initialize from tiktoken. In this case, for example, tiktoken expects the name `gpt-3.5-turbo`, not Azure's `gpt-35-turbo` (note the extra '.' in tiktoken's form). However, I think this problem is bigger than just detecting '.' characters, since you could have named your deployment anything.
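As a partial workaround for name mismatches like this one, you could normalize Azure-style names back to the form tiktoken expects before doing the lookup. This is only an illustrative sketch (`normalize_azure_model_name` is a hypothetical helper, not part of guidance), and it cannot work in general precisely because a deployment can be named anything:

```python
import re

def normalize_azure_model_name(name: str) -> str:
    """Map Azure-style names like 'gpt-35-turbo' to tiktoken's
    expected form 'gpt-3.5-turbo'. Illustrative only: Azure
    deployment names are user-chosen, so this heuristic can't
    cover arbitrary names."""
    # Re-insert the '.' that Azure drops from version numbers, e.g. 35 -> 3.5
    return re.sub(r"gpt-(\d)(\d)", r"gpt-\1.\2", name)

print(normalize_azure_model_name("gpt-35-turbo"))  # gpt-3.5-turbo
print(normalize_azure_model_name("gpt-4"))         # gpt-4 (unchanged)
```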
I'm going to continue investigating this -- and leave this issue open in the interim -- but for now, you should pass in a tokenizer manually:
```python
from guidance import user, assistant, models
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")

azureai_model = models.AzureOpenAIChat(
    model="gpt-35-turbo",
    azure_endpoint=azure_endpoint,
    api_key=api_key,
    api_version=api_version,
    temperature=0.1,
    tokenizer=enc,  # manually pass in the tokenizer for the model corresponding to your deployment
)
```
I'm hopeful that we can use some library function to detect the underlying model from an AzureOpenAI deployment (@riedgar-ms, any thoughts?), but haven't found a way to do so upon a quick search. Hopefully I'm just missing something simple! Thanks again for reporting this!
Just FYI @Harsha-Nori, there is a bug where, if you initialize AzureOpenAIChat, it will not use the passed tokenizer, since it hardcodes a request to tiktoken. The trivial fix is to change line 109 in the AzureOpenAI file to:

```python
tokenizer=tokenizer or tiktoken.encoding_for_model(model),
```
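To spell out why this one-line change works: Python's `or` returns its left operand when it is truthy, so a caller-supplied tokenizer short-circuits the tiktoken lookup. A minimal sketch with stand-in names (not guidance's real classes):

```python
def pick_tokenizer(tokenizer=None):
    """Mimics the fixed line: use the caller's tokenizer if one was
    passed; otherwise fall back to a default lookup. Note that any
    falsy value (None, empty string, etc.) triggers the fallback."""
    def default_lookup():
        # Stand-in for tiktoken.encoding_for_model(model)
        return "tiktoken-default"
    return tokenizer or default_lookup()

print(pick_tokenizer())          # tiktoken-default
print(pick_tokenizer("my-enc"))  # my-enc
```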
Here is a PR https://github.com/guidance-ai/guidance/pull/641
btw, one could also use AzureOpenAI function calling to mirror the use of grammars, are y'all interested in that?
Good catch -- merged the PR in :).
> btw, one could also use AzureOpenAI function calling to mirror the use of grammars, are y'all interested in that?
Do you mind expanding on this? I don't think we can leverage it for totally arbitrary grammars, but I can see how we could leverage it for e.g. JSON grammars.
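For the JSON case, the rough idea would be to express the desired output structure as an OpenAI-style function/tool definition with a JSON Schema for its parameters, so the model is steered toward schema-conforming arguments. A hedged sketch of what such a tool payload could look like (`return_person` and the schema are made up for illustration; this builds the request payload only and makes no API call):

```python
import json

# Hypothetical JSON Schema standing in for a "JSON grammar"
person_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
}

# OpenAI-style tool definition wrapping that schema; the model's
# function-call arguments are nudged to conform to it
tool = {
    "type": "function",
    "function": {
        "name": "return_person",
        "description": "Return the extracted person record.",
        "parameters": person_schema,
    },
}

print(json.dumps(tool, indent=2))
```

This approximates constrained JSON output, but as noted above it would not extend to totally arbitrary grammars.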
**The bug**

```
KeyError: 'Could not automatically map gpt-35-turbo to a tokeniser. Please use `tiktoken.get_encoding` to explicitly get the tokeniser you expect.'
```

**To Reproduce**

python==3.11.5, openai==1.12.0, guidance==0.1.10, model_name=gpt-35-turbo