
[Bug]: Agent Requests Above Maximum Token Limit #2888

wTaylorBickelmann opened this issue 2 months ago

Describe the bug

I'm not sure whether this is better understood as a bug or a feature request, but I was using OpenDevin when the following error appeared in the logs:

litellm.exceptions.ContextWindowExceededError: litellm.BadRequestError: litellm.ContextWindowExceededError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 8192 tokens. However, you requested 8641 tokens (4545 in the messages, 4096 in the completion). Please reduce the length of the messages or completion.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

The numbers check out: 4,545 prompt tokens plus the 4,096 tokens reserved for the completion come to 8,641, just over the model's 8,192-token window. It seems like this could be solved by truncating or chunking the message history whenever the context window would be exceeded (and maybe that is the intent?); a sketch follows.
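For concreteness, here is a minimal sketch of the kind of truncation I mean, not OpenDevin's actual logic: `litellm.token_counter` and `litellm.completion` are real litellm APIs, while the model name, token budgets, and `fit_messages` helper are illustrative assumptions.

```python
# A minimal sketch of the truncation I have in mind, not OpenDevin's actual
# logic. litellm.token_counter and litellm.completion are real litellm APIs;
# the model name, budgets, and fit_messages helper are illustrative.
import litellm

MODEL = "gpt-4"                 # 8,192-token context window in this example
MAX_COMPLETION = 1024           # reserve this much for the reply
PROMPT_BUDGET = 8192 - MAX_COMPLETION

def fit_messages(messages):
    """Drop the oldest non-system turns until the prompt fits the budget."""
    trimmed = list(messages)
    while (len(trimmed) > 1
           and litellm.token_counter(model=MODEL, messages=trimmed) > PROMPT_BUDGET):
        trimmed.pop(1)          # keep messages[0], the system prompt
    return trimmed

history = [
    {"role": "system", "content": "You are a coding agent."},
    {"role": "user", "content": "Fix my Flask program ..."},
    # ... many intermediate agent/observation turns ...
]

response = litellm.completion(
    model=MODEL,
    messages=fit_messages(history),
    max_tokens=MAX_COMPLETION,  # keep prompt + completion under the window
)
```

Keeping the system prompt and evicting the oldest turns first preserves the agent's instructions while bounding the prompt size; summarizing the evicted turns instead of dropping them would be a natural refinement.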

Current OpenDevin version

ghcr.io/opendevin/opendevin:latest (which on 7/10/24 I think would be 0.7.1)

Installation and Configuration

WORKSPACE_BASE=$(pwd)/workspace
docker run -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name opendevin-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/opendevin/opendevin
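A side note on the configuration: the 4,096 "completion" tokens in the error are reserved up front by the request, so even a modest prompt can tip an 8k model over the limit. If the image reads a completion cap from the environment, passing one in the run command would leave more room for the prompt. A sketch, assuming a variable named `LLM_MAX_OUTPUT_TOKENS` (the exact name, and whether 0.7.1 supports it at all, is a guess on my part):

```bash
# Same run command as above, with a hypothetical completion cap added.
# LLM_MAX_OUTPUT_TOKENS is an assumed variable name; it may differ or
# not exist in 0.7.1.
WORKSPACE_BASE=$(pwd)/workspace
docker run -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_MAX_OUTPUT_TOKENS=2048 \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name opendevin-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/opendevin/opendevin
```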

Model and Agent

gpt-4 with CodeActAgent

Operating System

WSL

Reproduction Steps

I asked it to fix a Flask program.

Logs, Errors, Screenshots, and Additional Context

error_log.txt

SmartManoj commented 2 months ago

#2021 will solve this.