FellowTraveler opened 1 month ago
I'm not sure off-hand if there's some other problem here, but if it got to 5,000,000 characters and you want it to continue, you can configure this value in config.toml, for example:
max_chars=10000000
or as few or as many as you like. Do you think it's too few by default?
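In context, the override looks like this in config.toml. (The `[core]` section placement is an assumption here; check where your version's config.toml template defines `max_chars`.)

```toml
# config.toml -- raise the per-session character cap
# (section placement may differ by version)
[core]
max_chars = 10000000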
Is there a way to modify that in docker? Im very new to docker.
Well if it's going through the headers 1-by-1 and diagramming them, and since the header itself appears in the log I posted, I guess my question becomes:
Do you really think that header above in the log is 5,000,000 chars long?
EDIT: This might explain how I spent $20 last night on the API diagramming headers...
> Is there a way to modify that in docker?
Pass -e MAX_CHARS=10000000
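Spelled out, assuming you start OpenDevin with `docker run` (the image name and the other flags below are illustrative; keep whatever flags you already use and just add the `-e` line):

```shell
# -e passes an environment variable into the container;
# the image name here is an assumption.
docker run -it \
    -e MAX_CHARS=10000000 \
    ghcr.io/opendevin/opendevin:latest
```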
max_chars keeps a count of all characters sent to the API during a session, not just one message. So if this is step 42, it includes the characters from all previous steps and their history too; in other words, it counts the successive contexts sent to the OpenAI API. Also, at least in some cases, I think it includes work done earlier: for example, when you are asked whether you want to resume a previous session and do so, it reuses the same session and keeps counting. That said, it's not clear to me whether that happened here.
The intention of this max_chars variable check is precisely to allow users to control cost in some way. It might not be a perfect way... it's one way.
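To illustrate the accounting described above, here is a minimal sketch (not OpenDevin's actual code; apart from the MaxCharsExceedError name, everything here is hypothetical) of a counter that accumulates across every API call in a session:

```python
class MaxCharsExceedError(Exception):
    """Raised when the session's cumulative character count passes the cap."""


class SessionCharCounter:
    """Hypothetical sketch: accumulate characters across ALL API calls.

    Because each step resends the growing conversation history, the total
    grows roughly quadratically with the number of steps, which is why the
    cap can be hit even though no single message is anywhere near 5M chars.
    """

    def __init__(self, max_chars: int = 5_000_000):
        self.max_chars = max_chars
        self.total = 0

    def record_call(self, context: str) -> None:
        # Count the full context sent on this call, not just the new message.
        self.total += len(context)
        if self.total > self.max_chars:
            raise MaxCharsExceedError(
                f"Session sent {self.total:,} chars, cap is {self.max_chars:,}"
            )


# Tiny demo with a 100-char cap: each step resends the whole history.
counter = SessionCharCounter(max_chars=100)
history = ""
for step in range(5):
    history += "x" * 30          # each step appends ~30 chars of history
    try:
        counter.record_call(history)
    except MaxCharsExceedError:
        print(f"cap hit at step {step}")
        break
```

This is why the cap trips on a short header: the bill is for the whole session's resent context, not that one message.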
We actually have a newer feature more on point, I think: max_budget_per_task. It's new enough that I haven't actually tried it yet myself.
Its meaning is exactly what you'd think: the maximum budget (in USD) allowed per task, beyond which the agent will stop.
Yeah, budget is usually more useful, but it's only accurate (or even applicable) for some models, e.g. OpenAI models. max_chars gives you more control, but it doesn't directly translate to dollar values.
It seems like crashing isn't the best response to this 'error'.
We should create a Budget class that can set a budget by tokens, by time, or by number of tries, so the agent stops cleanly if it runs out of time, exceeds, say, 3 tries, or runs out of tokens.
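A rough sketch of what such a class could look like. Purely illustrative: none of these names exist in OpenDevin today, and real pricing would come from the provider's usage data.

```python
import time


class BudgetExceeded(Exception):
    """Raised when any configured limit (cost, tokens, time, tries) is hit."""


class Budget:
    """Illustrative sketch: stop an agent by cost, tokens, time, or retries."""

    def __init__(self, max_usd=None, max_tokens=None,
                 max_seconds=None, max_tries=None):
        self.max_usd = max_usd
        self.max_tokens = max_tokens
        self.max_seconds = max_seconds
        self.max_tries = max_tries
        self.usd = 0.0
        self.tokens = 0
        self.tries = 0
        self.started = time.monotonic()

    def charge(self, usd=0.0, tokens=0, tries=0):
        """Record usage after each step, then enforce every configured limit."""
        self.usd += usd
        self.tokens += tokens
        self.tries += tries
        if self.max_usd is not None and self.usd > self.max_usd:
            raise BudgetExceeded(f"spent ${self.usd:.2f} > ${self.max_usd:.2f}")
        if self.max_tokens is not None and self.tokens > self.max_tokens:
            raise BudgetExceeded(f"{self.tokens} tokens > {self.max_tokens}")
        if self.max_tries is not None and self.tries > self.max_tries:
            raise BudgetExceeded(f"{self.tries} tries > {self.max_tries}")
        if (self.max_seconds is not None
                and time.monotonic() - self.started > self.max_seconds):
            raise BudgetExceeded("time limit exceeded")
```

The agent loop would call `charge(...)` after each step; raising a dedicated exception lets the controller end the task gracefully instead of crashing.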
Is there an existing issue for the same bug?
Describe the bug
This isn't a problem for me but I just thought you'd want to know. I was using OpenDevin to help me document / diagram C++ header files, going through them one by one, when it hit this MaxCharsExceedError.
Current OpenDevin version
Installation and Configuration
Model and Agent
- Model: OpenAI GPT-4o
- Agent: Default, which I think is CodeAct.
Operating System
It's running Linux inside the Docker container, but my laptop is an Apple Silicon Mac (M3 Max).
Reproduction Steps
Grab the opentxs library from GitHub and tell the agent to go through the headers in the include folder, producing concise summaries and Mermaid diagrams for everything. It will process the headers one by one, and you can keep prompting it to continue until you've spent about $20 on the OpenAI API, at which point the error occurs. P.S. I had to restart it once or twice, which is why the log only says $6. But trust me, you aren't getting off that cheap.
Logs, Errors, Screenshots, and Additional Context