load1n9 / openai

Unofficial Deno wrapper for the OpenAI API
MIT License

Add types for handling errors? #10

Closed tkafka closed 1 year ago

tkafka commented 1 year ago

I noticed that when there is an error from the OpenAI API, the response from createChatCompletion looks like this:


```
{
  error: {
    message: "That model is currently overloaded with other requests. You can retry your request, or contact us through our help center at help.openai.com if the error persists. (Please include the request ID <redacted> in your message.)", // string
    type: "server_error", // string
    param: null, // any?
    code: null, // string | null
  },
}
```

Could you please add the types for the error response to the API?
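For reference, a minimal sketch of what such types might look like, based on the payloads in this thread (the interface names are placeholders, not types exported by this library):

```ts
// Hypothetical names -- not part of this library's public API.
interface OpenAIError {
  message: string;
  type: string;
  param: string | null;
  code: string | null;
}

interface OpenAIErrorResponse {
  error: OpenAIError;
}
```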
tkafka commented 1 year ago

Another example of an error:

```
{
  error: {
    message: "This model's maximum context length is 4097 tokens. However, your messages resulted in 4144 tokens. Please reduce the length of the messages.",
    type: "invalid_request_error",
    param: "messages",
    code: "context_length_exceeded",
  },
}
```
load1n9 commented 1 year ago

will do

lino-levan commented 1 year ago

I'm on the fence about adding types

lino-levan commented 1 year ago

I think we should instead throw the error with the information.
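A minimal sketch of that approach, assuming the wrapper parses the JSON body before returning it (the class name and helper below are hypothetical, not part of this library):

```ts
// Hypothetical error class carrying the fields from the API's error payload.
class OpenAIApiError extends Error {
  constructor(
    message: string,
    public readonly type: string,
    public readonly param: string | null,
    public readonly code: string | null,
  ) {
    super(message);
    this.name = "OpenAIApiError";
  }
}

// Hypothetical helper the wrapper could call after parsing the response body.
function throwIfError(body: Record<string, unknown>): void {
  const err = body.error as
    | { message: string; type: string; param: string | null; code: string | null }
    | undefined;
  if (err) {
    throw new OpenAIApiError(err.message, err.type, err.param, err.code);
  }
}
```

Callers could then use a regular try/catch around createChatCompletion and inspect `type` or `code` to decide whether to retry.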