JeremyAlain opened 3 months ago
Similar issue:
{"type":"error","error":{"type":"invalid_request_error","message":"messages: all messages must have non-empty content except for the optional final assistant message"}}
Yeah, when Claude 3 works with open-interpreter it's amazing, but errors like the following are all too frequent. I can't tell exactly what's going on, but in the middle of the stack trace there's:
litellm.llms.anthropic.AnthropicError:
{"type":"error","error":{"type":"authentication_error","message":"invalid x-api-key"}}
This is strange, because open-interpreter has successfully started generating output with Claude's help.
Further down the trace, there's this:
litellm.exceptions.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages: all messages must have non-empty content except for the optional final assistant message"}}
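The invalid_request_error spells out a concrete rule: every message must have non-empty content, with one exception, an empty assistant message in the final position. A minimal sketch of that rule (the validator function below is hypothetical, written here for illustration; it is not part of litellm or the Anthropic SDK):

```python
def violates_content_rule(messages):
    """Return True if a messages list would trigger Anthropic's
    'all messages must have non-empty content' error.
    An empty *final assistant* message is the one allowed exception."""
    for i, msg in enumerate(messages):
        is_last = i == len(messages) - 1
        empty = not msg.get("content")
        if empty and not (is_last and msg.get("role") == "assistant"):
            return True
    return False

# An empty assistant message mid-conversation is rejected:
bad = [
    {"role": "user", "content": "Run the simulation."},
    {"role": "assistant", "content": ""},  # empty and not final -> invalid
    {"role": "user", "content": "Continue."},
]

# The same empty message is tolerated only in the final position:
ok = [
    {"role": "user", "content": "Run the simulation."},
    {"role": "assistant", "content": ""},  # empty but final assistant -> allowed
]
```

This suggests open-interpreter is at some point inserting an empty message (e.g. an empty tool/code result) into the middle of the conversation it sends back to the API.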
Describe the bug
I am running Open Interpreter with the new Claude 3 model on a Google Colab. Here are the parameters:
interpreter.llm.model = "claude-3-opus-20240229"
interpreter.llm.context_window = 200000
interpreter.llm.max_tokens = 4000
The model runs fine until it starts executing code. For instance, it writes some code to simulate an unbiased coin flip and executes it; the result of the code (i.e. the output of the Python function) actually gets printed out, but then the whole program just blocks and does not continue.
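For context, the code the model writes for this prompt is trivial; a minimal standalone version (my own sketch, not the model's exact output) runs and prints its result without issue, which is consistent with the hang happening afterward, when Open Interpreter sends the execution result back to Claude:

```python
import random

def coinflip_simulation(n=1000, seed=None):
    """Simulate n unbiased coin flips and return the counts."""
    rng = random.Random(seed)
    flips = [rng.choice(["heads", "tails"]) for _ in range(n)]
    heads = flips.count("heads")
    return {"heads": heads, "tails": n - heads}

print(coinflip_simulation(1000))
```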
If I run the same thing with gpt-4, it works flawlessly.
Reproduce
interpreter.llm.model = "claude-3-opus-20240229"
interpreter.llm.context_window = 200000
interpreter.llm.max_tokens = 4000
interpreter.chat()
Type "Run a simulation of 1000 unbiased coinflips and report the result."
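Put together, the whole reproduction fits in a single Colab cell (assuming open-interpreter 0.2.0 is installed and ANTHROPIC_API_KEY is set in the environment; passing the prompt directly to chat() is a non-interactive alternative to calling interpreter.chat() and typing it):

```python
from interpreter import interpreter

interpreter.llm.model = "claude-3-opus-20240229"
interpreter.llm.context_window = 200000
interpreter.llm.max_tokens = 4000

# Hangs after the coinflip code prints its output:
interpreter.chat("Run a simulation of 1000 unbiased coinflips and report the result.")
```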
Expected behavior
Run the code, then report the result to the user, who can then pass new commands.
Screenshots
No response
Open Interpreter version
0.2.0
Python version
3.10.12
Operating System name and version
Google Colab
Additional context
No response