shaxxx opened this issue 1 week ago
Tried with version 0.13; the only difference is that the agent now informs me in the chat window that there was an error. I even tried with alternative settings: other models work, gpt-4o-mini doesn't. Cline works without any problems with the same model.
Thanks very much for bringing this issue up. Can you try the `main` version as well and see if it works there?
```shell
docker run -it --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    -e LOG_ALL_EVENTS=true \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:main
```
There was a change that added some support for other models and I'd like to see if it works here too.
I had to modify the startup command to properly escape it for the bash shell in Docker, and I added a few extra parameters:
```shell
docker run -it --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LOG_ALL_EVENTS=true \
    -e DEBUG=1 \
    -e SANDBOX_TIMEOUT=120 \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v //var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    docker.all-hands.dev/all-hands-ai/openhands:main
```
Here's the console log: https://pastebin.com/raw/wiwCwfgn
Here's the OpenHands log: https://pastebin.com/raw/j75riK9F
Still not working.
Yeah, something is up with this model. I see this in the logs; not sure if this is what causes the error:
```
14:38:18 - openhands:DEBUG: action_execution_server.py:167 - Action output:
**ErrorObservation**
File not found: /workspace/.gitignore. Your current working directory is /workspace.
```
Let me see if I can get someone to take a look at this.
It's not. I tried creating a valid .gitignore file and it still fails. But setting the litellm output to verbose reveals the real error:

```
Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'
```
```
2024-11-12 17:30:39 RAW RESPONSE:
2024-11-12 17:30:39 {"id": null, "choices": null, "created": null, "model": null, "object": null, "service_tier": null, "system_fingerprint": null, "usage": null, "error": {"message": "Provider returned error", "code": 400, "metadata": {"raw": "{\n \"error\": {\n \"message\": \"Invalid parameter: messages with role 'tool' must be a response to a preceeding message with 'tool_calls'.\",\n \"type\": \"invalid_request_error\",\n \"param\": \"messages.[3].role\",\n \"code\": null\n }\n}", "provider_name": "OpenAI"}}}
2024-11-12 17:30:39 openai.py: Received openai error -
2024-11-12 17:30:39 RAW RESPONSE:
2024-11-12 17:30:39 Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
2024-11-12 17:30:39 LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
2024-11-12 17:30:39 Traceback (most recent call last):
2024-11-12 17:30:39   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 854, in completion
2024-11-12 17:30:39     raise e
2024-11-12 17:30:39   File "/app/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 805, in completion
2024-11-12 17:30:39     return convert_to_model_response_object(
2024-11-12 17:30:39            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-11-12 17:30:39   File "/app/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/llm_response_utils/convert_dict_to_response.py", line 366, in convert_to_model_response_object
2024-11-12 17:30:39     raise raised_exception
2024-11-12 17:30:39 Exception
```
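For context on what the 400 means: in the OpenAI chat-completions format, a message with role `tool` is only valid immediately after an assistant message whose `tool_calls` list contains the matching `tool_call_id`. Below is a minimal sketch of that rule — `validate_tool_messages` is a hypothetical helper written for illustration, not code from OpenHands or litellm:

```python
# Sketch of the OpenAI chat message-ordering rule behind the 400 error:
# a "tool" message must directly follow an assistant message whose
# tool_calls list contains the matching tool_call_id.

def validate_tool_messages(messages):
    """Return indices of 'tool' messages that violate the ordering rule."""
    bad = []
    pending_ids = set()
    for i, msg in enumerate(messages):
        if msg.get("role") == "assistant" and msg.get("tool_calls"):
            # These ids may be answered by the tool messages that follow.
            pending_ids = {tc["id"] for tc in msg["tool_calls"]}
        elif msg.get("role") == "tool":
            if msg.get("tool_call_id") not in pending_ids:
                bad.append(i)
        else:
            # Any other message breaks the assistant -> tool sequence.
            pending_ids = set()
    return bad

# Valid: assistant issues a tool call, the tool result answers it.
ok = [
    {"role": "user", "content": "list files"},
    {"role": "assistant", "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "ls", "arguments": "{}"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "README.md"},
]

# Invalid: a 'tool' message with no preceding assistant tool_calls —
# the shape that would trigger the provider's 400 above.
broken = [
    {"role": "user", "content": "list files"},
    {"role": "tool", "tool_call_id": "call_1", "content": "README.md"},
]

print(validate_tool_messages(ok))      # []
print(validate_tool_messages(broken))  # [1]
```

The `"param": "messages.[3].role"` field in the raw response suggests it is the fourth message in the request that ends up in this orphaned-`tool` position.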
Apologies for the ping @xingyaoww but user seems to have an issue with GPT-4o-mini specifically. Is it that the model doesn't support something?
@shaxxx In your last screenshot, it shows `o1-mini`, not `gpt-4o-mini`. Is `o1-mini` the one that doesn't work for you?

With the current `main`, I don't seem to reproduce the original issue with 4o; `gpt-4o-mini` works.
> @shaxxx In your last screenshot, it shows `o1-mini`, not `gpt-4o-mini`. Is `o1-mini` the one that doesn't work for you? With the current `main`, I don't seem to reproduce the original issue with 4o; `gpt-4o-mini` works.
I apologize for mixing up the screenshots. o1-mini works; gpt-4o-mini doesn't.
I've deleted all the containers, recreated them from `main` (there were some changes, since Docker needed to re-download image layers), built new containers without workspace bindings, and it's still the same. Here's the clean log output for the "build a todo app with vue" prompt, with `litellm.set_verbose=True` and all OpenHands debugging on. This is as good as it gets: https://pastebin.com/raw/LZu3k7xz
@shaxxx You're right, I can reproduce it now. For now, can you please add `-e AGENT_FUNCTION_CALLING=false` to the Docker command from the README?
> @shaxxx You're right, I can reproduce it now. For now, can you please add `-e AGENT_FUNCTION_CALLING=false` to the Docker command from the README?
I can confirm it doesn't crash with the additional setting. Well, kind of: the vue todo app prompt did throw the error

```
500 Server Error: Internal Server Error for url: http://host.docker.internal:31691/execute_action
```

but it was working normally for a while, which makes me think this is another, unrelated error with the container dependencies/setup and can be ignored.
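For anyone hitting the same issue, the workaround flag slots into the README-style command like this. This is a sketch assembled from the commands earlier in this thread, with only `-e AGENT_FUNCTION_CALLING=false` added; adjust the image tags and mounts to your setup:

```shell
docker run -it --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik \
    -e AGENT_FUNCTION_CALLING=false \
    -e LOG_ALL_EVENTS=true \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app \
    docker.all-hands.dev/all-hands-ai/openhands:main
```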
**Is there an existing issue for the same bug?**

**Describe the bug and reproduction steps**

When the `openrouter/openai/gpt-4o-mini` model is entered in the Advanced options **Custom model** field, OpenHands returns an error in the console and does not respond to the prompt. If I enter any other model, leaving all other settings unchanged, it works as expected (e.g. `openrouter/openai/o1-mini`). I can reproduce it with any prompt (e.g. "Say hello"). And yes, I've triple-checked the key; again, other models work without changing the key.

Also, why is there no `openai/gpt-4o-mini` option among the openrouter models? Its price and ranking (currently #1 in the "Programming/scripting" category) make it the default option for most tasks.

**OpenHands Installation**

Docker command in README

**OpenHands Version**

0.12

**Operating System**

WSL on Windows

**Logs, Errors, Screenshots, and Additional Context**

And here are my settings