During the development of new prompts (context: pe5sF9-2UY) and experiments with the GPT-4 API, it has come to my attention that the `finish_reason` parameter can sometimes return the value `"length"`, indicating that the AI had more to say but was cut off due to token limitations. This scenario leads to incomplete responses, which can cause confusion for users due to the generic nature of the error message currently being displayed.
The purpose of this issue is to enhance the user experience by providing clearer, more informative error messages when the `finish_reason` parameter indicates an incomplete output or other errors. By doing so, we aim to give users a better understanding of why their response was truncated and what actions they can take to remedy the situation.
Delving deeper, the `finish_reason` parameter can yield these values:

- `stop`: API returned a complete message, or a message terminated by one of the stop sequences provided via the `stop` parameter
- `length`: Incomplete model output due to the `max_tokens` parameter or token limit
- `function_call`: The model decided to call a function
- `content_filter`: Omitted content due to a flag from our content filters
- `null`: API response still in progress or incomplete
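The mapping above could be handled roughly as follows. This is a minimal sketch, assuming a simple lookup from `finish_reason` to a user-facing string; the function name and message copy are illustrative, not the app's actual implementation.

```python
# Hypothetical mapping from finish_reason values to clearer user-facing
# messages. Only the values that warrant an error message get an entry.
USER_MESSAGES = {
    "length": (
        "The response was cut short because it hit the token limit. "
        "Try a shorter prompt, or ask the model to continue."
    ),
    "content_filter": (
        "Part of the response was omitted by the content filter."
    ),
}


def message_for(finish_reason):
    """Return a user-facing message, or None when no error applies.

    - "stop" means the message completed normally, so no message is shown.
    - "function_call" is handled by the function-calling flow, not an error.
    - None (the JSON null) means the response is still in progress.
    """
    return USER_MESSAGES.get(finish_reason)
```

For example, `message_for("length")` yields the truncation explanation, while `message_for("stop")` and `message_for(None)` both return `None`, so the UI shows nothing instead of a generic error.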
iOS issue: https://github.com/woocommerce/woocommerce-ios/issues/13259