gzcqy opened 1 year ago
Could you solve it? I am stuck there too
Try:

```csharp
var results = await api.Chat.CreateChatCompletionAsync(new ChatRequest()
{
    //Model = Model.GPT4,
    Model = Model.ChatGPTTurbo0301,
    //Model = Model.DefaultModel,
    Temperature = temperature,
    MaxTokens = maxtoken,
    Messages = new ChatMessage[]
    {
        new ChatMessage(ChatMessageRole.User, prompt)
    }
});
var reply = results.Choices[0].Message;
return reply.Content;
```
The LLMs offered by OpenAI are divided into Chat models and Completion models, although they function basically the same way. GPT-4 and GPT-3.5 are Chat models, while GPT-3 (DaVinci) and the Instruct versions of GPT-3.5 are Completion models. It frustrates me because it makes testing a bit more complex, but I understand they kept the split to avoid breaking implementations in older solutions.
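Because of that split, one way to hide the difference is a small dispatch helper. This is only a sketch, assuming the same OpenAI_API .NET client used in the snippets in this thread; `GetReplyAsync` and the model-ID prefix check are my own naming and heuristic, not part of the library:

```csharp
// Hypothetical helper: route a prompt to the chat or completion endpoint
// depending on the model family. Chat models (gpt-4, gpt-3.5-turbo) must
// use v1/chat/completions; older completion models use v1/completions.
public static async Task<string> GetReplyAsync(
    OpenAIAPI api, OpenAI_API.Models.Model model, string prompt,
    double temperature, int maxTokens)
{
    bool isChatModel = model.ModelID.StartsWith("gpt-4")
                    || model.ModelID.StartsWith("gpt-3.5-turbo");

    if (isChatModel)
    {
        var chat = await api.Chat.CreateChatCompletionAsync(new ChatRequest()
        {
            Model = model,
            Temperature = temperature,
            MaxTokens = maxTokens,
            Messages = new ChatMessage[]
            {
                new ChatMessage(ChatMessageRole.User, prompt)
            }
        });
        return chat.Choices[0].Message.Content;
    }

    var completion = await api.Completions.CreateCompletionAsync(
        new CompletionRequest(prompt, model: model,
                              max_tokens: maxTokens, temperature: temperature));
    return completion.Completions[0].Text;
}
```

With something like this, the calling code no longer needs to care which endpoint a given model requires.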
```csharp
var result = await api.Completions.CreateCompletionAsync(
    new CompletionRequest(prompt, model: Model.GPT4,
                          max_tokens: maxtoken, temperature: temperature));
```
```
System.Net.Http.HttpRequestException: Error at completions (https://api.openai.com/v1/completions) with HTTP status code: NotFound. Content:
{
  "error": {
    "message": "This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?",
    "type": "invalid_request_error",
    "param": "model",
    "code": null
  }
}
```
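The error message itself points at the fix: GPT-4 is a chat model, so it has to go through `api.Chat` rather than `api.Completions`. A minimal correction, assuming the same `api`, `prompt`, `temperature`, and `maxtoken` variables as the failing snippet:

```csharp
// GPT-4 must be called via v1/chat/completions, so use the Chat endpoint
// instead of the Completions endpoint.
var results = await api.Chat.CreateChatCompletionAsync(new ChatRequest()
{
    Model = Model.GPT4,
    Temperature = temperature,
    MaxTokens = maxtoken,
    Messages = new ChatMessage[]
    {
        new ChatMessage(ChatMessageRole.User, prompt)
    }
});
return results.Choices[0].Message.Content;
```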