Doriandarko / o1-engineer

o1-engineer is a command-line tool designed to assist developers in managing and interacting with their projects efficiently. Leveraging the power of OpenAI's API, this tool provides functionalities such as code generation, file editing, and project planning to streamline your development workflow.

Large projects fall out of context #15

Closed DavidKotykOfficial closed 2 days ago

DavidKotykOfficial commented 4 days ago

o1 engineer is thinking... Error while communicating with OpenAI: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, you requested 131447 tokens (71447 in the messages, 60000 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

kalebzaki4 commented 3 days ago

It seems you've encountered an error related to the model's context length limit (the maximum number of tokens it can handle in one request). The error indicates that your request exceeded the allowed token limit.

Here’s how you can address this issue:

  1. Reduce the number of tokens in the message: The model accepts up to 128,000 tokens, but your request had 131,447 tokens. Try reducing the content of the message by splitting it into smaller parts or removing unnecessary information.
  2. Restructure your request: If you are working with a large dataset or code, you may need to break the input into smaller chunks and process them in steps rather than sending everything at once.
  3. Summarize or filter irrelevant content: If possible, summarize or filter out unnecessary details from the input. This will help keep the token count within the limit without losing important information.

By adjusting the message size or splitting the request, you should be able to resolve the error.
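
The chunking idea in step 2 can be sketched roughly as follows. Exact token counts depend on the model's tokenizer (a library such as tiktoken gives precise counts); this sketch uses the common ~4 characters per token approximation, and the function name is illustrative, not from o1-engineer's code:

```python
# Rough sketch: split a large input into pieces that each stay under a
# token budget, so they can be sent as separate smaller requests.
CHARS_PER_TOKEN = 4  # rough heuristic, not an exact tokenizer count

def chunk_text(text: str, max_tokens: int) -> list[str]:
    """Split `text` into pieces of at most ~max_tokens tokens each."""
    max_chars = max_tokens * CHARS_PER_TOKEN
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

# Example: a ~71,447-token message (~285k characters at 4 chars/token)
# split into chunks of at most ~60,000 tokens each.
big_input = "x" * 285_788
chunks = chunk_text(big_input, max_tokens=60_000)
print(len(chunks))  # → 2
```

Each chunk can then be processed in its own request, optionally carrying forward a short summary of earlier chunks instead of the full text.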
Doriandarko commented 2 days ago

Yes, this is unfortunately a system constraint; you need to reduce the amount of content you send. The maximum is 128k tokens.
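
Since the 128k limit covers the prompt and the completion combined, one simple guard is to shrink the requested completion size when the prompt is large. A minimal sketch, with illustrative names (not o1-engineer's actual code):

```python
# Sketch: cap the completion budget so prompt + completion stays within
# the model's 128,000-token context window.
CONTEXT_LIMIT = 128_000

def completion_budget(prompt_tokens: int, desired_completion: int = 60_000) -> int:
    """Return the largest completion size that still fits the context window."""
    available = CONTEXT_LIMIT - prompt_tokens
    if available <= 0:
        raise ValueError("Prompt alone exceeds the context window; chunk it first.")
    return min(desired_completion, available)

# The failing request above: 71,447 prompt tokens + 60,000 completion
# = 131,447 tokens, which exceeds 128,000.
print(completion_budget(71_447))  # → 56553, so the request now fits
```

With this cap, the same 71,447-token prompt would go through with a smaller completion instead of being rejected outright.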