Closed: DavidKotykOfficial closed this issue 2 days ago
It looks like you've hit the model's context length limit, i.e. the maximum number of tokens it can handle in a single request: the messages you sent plus the requested completion exceeded the allowed total.

Here's how you can address this issue: shorten the messages you send, lower the number of tokens requested for the completion, or both, so that the combined total stays under the model's 128,000-token window.
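For example, one common fix is to estimate the token count of your messages up front and cap the requested completion size so the total stays inside the 128,000-token window. The sketch below is a minimal illustration, not part of o1-engineer: it assumes the `openai` and `tiktoken` Python packages, the `o200k_base` encoding, and the model name `o1-preview`; the `count_message_tokens` and `safe_completion_budget` helpers are hypothetical names I chose for the example.

```python
# Minimal sketch: cap the completion budget so messages + completion fit in 128k.
# Assumes the `openai` and `tiktoken` packages are installed; the counting below
# is an approximation, not an exact server-side token count.
import tiktoken
from openai import OpenAI

MODEL_CONTEXT_LIMIT = 128_000                      # model's maximum context window
ENCODING = tiktoken.get_encoding("o200k_base")     # assumed encoding for o1-family models

def count_message_tokens(messages):
    """Rough token count for a list of chat messages (content only)."""
    return sum(len(ENCODING.encode(m["content"])) for m in messages)

def safe_completion_budget(messages, requested=60_000, margin=500):
    """Shrink the completion budget so the request fits in the context window."""
    used = count_message_tokens(messages) + margin  # margin covers per-message overhead
    return max(0, min(requested, MODEL_CONTEXT_LIMIT - used))

client = OpenAI()
messages = [{"role": "user", "content": "…your prompt…"}]

response = client.chat.completions.create(
    model="o1-preview",                             # assumed model name
    messages=messages,
    max_completion_tokens=safe_completion_budget(messages),
)
print(response.choices[0].message.content)
```

With the numbers from the error below (71,447 message tokens plus a 60,000-token completion request), a guard like this would have shrunk the completion budget to roughly 56,000 tokens instead of letting the request fail.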
Yes, this is unfortunately a system constraint; you need to reduce the amount of content you send. The maximum is 128k tokens.
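If the input itself is what is too large, the other option is to trim what you send, for instance by dropping the oldest conversation turns until the prompt fits. A rough sketch, again assuming `tiktoken` for counting; the trimming strategy and the 68,000-token budget are just one possible choice, not how o1-engineer does it.

```python
# Sketch: drop the oldest non-system messages until the prompt fits the budget.
import tiktoken

ENCODING = tiktoken.get_encoding("o200k_base")      # assumed encoding

def trim_messages(messages, max_prompt_tokens=68_000):
    """Remove the oldest non-system messages until the estimated size fits."""
    trimmed = list(messages)

    def size(msgs):
        return sum(len(ENCODING.encode(m["content"])) for m in msgs)

    while size(trimmed) > max_prompt_tokens and len(trimmed) > 1:
        # keep the system prompt (if any) and drop the oldest following message
        drop_at = 1 if trimmed[0]["role"] == "system" else 0
        trimmed.pop(drop_at)
    return trimmed
```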
o1 engineer is thinking...
ERROR:root:Error while communicating with OpenAI: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, you requested 131447 tokens (71447 in the messages, 60000 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}