Closed diberry closed 6 months ago
There is some connection between the deployment and the model. If I have no deployments, I get a 400. If I have a deployment such as the one in the following image, I can create the assistant, the thread, and the message with an object such as:
```typescript
const options: AssistantCreationOptions = {
  model: "gpt-35-turbo-1106",
  name: "Math Tutor",
  instructions: "You are a personal math tutor. Write and run JavaScript code to answer math questions.",
  tools: [{ type: "code_interpreter" } as ToolDefinition],
};
```
This connection isn't clear from the reference documentation for `model`, which describes it only as "The ID of the model to use."

Where is the documentation that describes where to get the `model` value?
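For context, a minimal self-contained sketch of the object that worked for me. The local interface below only mirrors the relevant fields of `AssistantCreationOptions` (an assumption so the snippet runs without the SDK); the key point is that `model` is the deployment name, not the base model name:

```typescript
// Sketch only: this local interface mirrors the relevant fields of
// AssistantCreationOptions from @azure/openai-assistants, so the
// example is self-contained.
interface AssistantOptionsSketch {
  model: string; // must be the *deployment name*, not the base model name
  name: string;
  instructions: string;
}

// "gpt-35-turbo-1106" here is the deployment name created in the Azure
// portal; the underlying base model is gpt-35-turbo, version 1106.
const options: AssistantOptionsSketch = {
  model: "gpt-35-turbo-1106",
  name: "Math Tutor",
  instructions: "You are a personal math tutor.",
};

console.log(options.model); // prints "gpt-35-turbo-1106"
```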
> This connection isn't clear from the reference documentation for `model`, which describes it as "The ID of the model to use."
>
> Where is the documentation which describes where to get the `model` value?
I'll modify the PR to update the `model` doc string to clarify that it corresponds to the "Deployment name" in AOAI Studio.
**Describe the bug**
I based my code on the OpenAI Assistants sample in order to create the quickstart for the documentation. The sample uses the model "gpt-4-1106-preview". In my resource I can see model gpt-4 with version 1106-preview, so I assumed it was the same thing and that my resource had the model. When I run the code, I get a 400 "model not found" error.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
The model is found, or the error explains why the model is incorrect. Since this isn't an issue of the deployment/deployment name, why can't the model be an enum so I don't have to deal with strings? Could the error say something like "model not enabled in resource" or "model not found in resource" to distinguish these two issues?
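To illustrate the enum request as a sketch (the names below are hypothetical; the SDK does not define them today): a TypeScript string-literal union would catch typos at compile time while still serializing as a plain string on the wire:

```typescript
// Hypothetical: the SDK could ship a union of known base-model IDs.
// Deployment names are user-chosen, so those would stay plain strings.
type KnownModelId =
  | "gpt-4-1106-preview"
  | "gpt-35-turbo-1106";

// Runtime check mirroring the compile-time union, usable as a type guard.
function isKnownModel(id: string): id is KnownModelId {
  const known: readonly string[] = ["gpt-4-1106-preview", "gpt-35-turbo-1106"];
  return known.includes(id);
}

console.log(isKnownModel("gpt-4-1106-preview")); // true
console.log(isKnownModel("gpt4"));               // false
```

A union like this would also let the service return a precise error ("model not enabled in resource" vs. "model not found in resource") instead of a bare 400.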
**Additional context**
More enums, please.