When using the GPT-4o model to generate product descriptions via the ChatGPT API, we've encountered an issue where the JSON response is truncated, indicated by "finish_reason": "length". This means the model had more content to return but was cut off by the token limit (max_tokens) in effect for the call. More context here: pe5sF9-2UY
Steps to Reproduce:
1. Send a detailed prompt to the ChatGPT API using the GPT-4o model.
2. Observe the JSON response received, particularly looking for "finish_reason": "length".
3. Note the incomplete content and generic error message.
Changes
Add an explicit max_tokens parameter to the request, set to the new 4000-token threshold.
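The check and the fix above can be sketched as follows. This is a minimal sketch, not the production code: the request payload shows where the new max_tokens value would go, and the check_truncation helper (a name introduced here for illustration) detects the "finish_reason": "length" condition described in the report.

```python
def check_truncation(response: dict) -> bool:
    """Return True when the completion was cut off by the token limit,
    i.e. any choice in the response reports finish_reason == "length"."""
    return any(
        choice.get("finish_reason") == "length"
        for choice in response.get("choices", [])
    )

# Illustrative request payload with the new 4000-token threshold set
# explicitly, so truncation is governed by our limit rather than a default:
request = {
    "model": "gpt-4o",
    "max_tokens": 4000,
    "messages": [
        {"role": "user", "content": "Write a product description for ..."}
    ],
}

# Simulated responses, mirroring the truncated case from this report
# and a normal completion:
truncated = {"choices": [{"finish_reason": "length", "message": {"content": "..."}}]}
complete = {"choices": [{"finish_reason": "stop", "message": {"content": "done"}}]}

print(check_truncation(truncated))  # True  -> response was cut off
print(check_truncation(complete))   # False -> response finished normally
```

A caller that sees check_truncation return True can retry with a shorter prompt or a higher max_tokens rather than silently shipping a partial description.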