Is there an existing issue for the same bug?
Describe the bug
Not sure if this is better understood as a bug or a feature request, but I was using OpenDevin when the following error appeared in the logs:

litellm.exceptions.ContextWindowExceededError: litellm.BadRequestError: litellm.ContextWindowExceededError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, you requested 8641 tokens (4545 in the messages, 4096 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

It seems like this could be solved by chunking or truncating the message history whenever the context window is exceeded (and maybe that is already the intent?). A sketch of what I mean is below.
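To illustrate, here is a minimal sketch of that kind of truncation. This is not OpenDevin's actual code: `truncate_history`, `count_tokens`, and the constants are hypothetical names, with the 8192/4096 numbers taken from the error above, and the +4 per-message overhead is just a rough approximation.

```python
import tiktoken

CONTEXT_LIMIT = 8192      # gpt-4's maximum context length (from the error)
COMPLETION_BUDGET = 4096  # tokens reserved for the completion (from the error)

def count_tokens(messages, model="gpt-4"):
    """Rough token count for a list of chat messages."""
    enc = tiktoken.encoding_for_model(model)
    # +4 per message approximates the per-message formatting overhead.
    return sum(len(enc.encode(m["content"])) + 4 for m in messages)

def truncate_history(messages, model="gpt-4"):
    """Drop the oldest non-system messages until the prompt plus the
    reserved completion budget fits inside the context window."""
    budget = CONTEXT_LIMIT - COMPLETION_BUDGET
    messages = list(messages)
    while count_tokens(messages, model) > budget and len(messages) > 1:
        # Keep the system prompt (index 0); drop the oldest turn after it.
        del messages[1]
    return messages
```

Alternatively, since 4545 (messages) + 4096 (completion) = 8641 only slightly exceeds 8192, lowering the requested max_tokens for the completion would also have avoided this particular failure.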
Current OpenDevin version
ghcr.io/opendevin/opendevin:latest (which as of 2024-07-10 I believe is 0.7.1)
Installation and Configuration
Model and Agent
gpt-4 with CodeActAgent
Operating System
WSL
Reproduction Steps
I asked it to fix a Flask program.
Logs, Errors, Screenshots, and Additional Context
error_log.txt