All-Hands-AI / OpenHands


[Bug]: bash: --: invalid option error message #3558

Open · CD1010 opened this issue 3 weeks ago

CD1010 commented 3 weeks ago

Is there an existing issue for the same bug?

Describe the bug

Running the startup docker command throws a bash error

Current OpenHands version

0.8 docker

Installation and Configuration

I have Docker installed on Windows 10.
I ran this inside a conda Ubuntu image on WSL.

Ollama is running in open-webui.  That works.

export WORKSPACE_BASE=$(pwd)/workspace
docker run -it \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3011:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/opendevin/opendevin:0.8 \
    --add-host host.docker.internal:host-gateway \
    -e LLM_API_KEY="ollama" \
    -e LLM_BASE_URL="http://host.docker.internal:11434" \
    -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434"
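
Note: in docker run, anything placed after the image name is passed to the container's command instead of being parsed by Docker, so the trailing --add-host and -e flags above are handed to bash inside the container; that is presumably what produces the "bash: --: invalid option" error below. A reordered sketch of the same command, with every flag moved before the image name:

export WORKSPACE_BASE=$(pwd)/workspace
docker run -it \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_API_KEY="ollama" \
    -e LLM_BASE_URL="http://host.docker.internal:11434" \
    -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434" \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3011:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/opendevin/opendevin:0.8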

Model and Agent

No response

Operating System

windows 10

Reproduction Steps

Run the above docker command; it fails with "bash: --: invalid option".

Logs, Errors, Screenshots, and Additional Context

2024-08-23 09:54:29 Starting OpenDevin...
2024-08-23 09:54:29 Setting up enduser with id 1000
2024-08-23 09:54:29 Docker socket group id: 1001
2024-08-23 09:54:29 Creating group with id 1001
2024-08-23 09:54:30 Running as enduser
2024-08-23 09:54:30 bash: --: invalid option
2024-08-23 09:54:30 Usage: bash [GNU long option] [option] ...
2024-08-23 09:54:30        bash [GNU long option] [option] script-file ...
2024-08-23 09:54:30 GNU long options:
2024-08-23 09:54:30   --debug
2024-08-23 09:54:30   --debugger
2024-08-23 09:54:30   --dump-po-strings
2024-08-23 09:54:30   --dump-strings
2024-08-23 09:54:30   --help
2024-08-23 09:54:30   --init-file
2024-08-23 09:54:30   --login
2024-08-23 09:54:30   --noediting
2024-08-23 09:54:30   --noprofile
2024-08-23 09:54:30   --norc
2024-08-23 09:54:30   --posix
2024-08-23 09:54:30   --pretty-print
2024-08-23 09:54:30   --rcfile
2024-08-23 09:54:30   --restricted
2024-08-23 09:54:30   --verbose
2024-08-23 09:54:30   --version
2024-08-23 09:54:30 Shell options:
2024-08-23 09:54:30   -ilrsD or -c command or -O shopt_option (invocation only)
2024-08-23 09:54:30   -abefhkmnptuvxBCEHPT or -o option

neubig commented 3 weeks ago

Hi @CD1010, we just upgraded to version 0.9.0. Could you please run with the new docker command from the README?

WORKSPACE_BASE=$(pwd)/workspace
docker run -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/all-hands-ai/openhands:0.9
CD1010 commented 3 weeks ago

I still got errors until I added the env variables -e LLM_BASE_URL="http://host.docker.internal:11434" and -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434".
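
For reference, the 0.9 command from above with those variables folded in (same Ollama URL as elsewhere in the thread) would look roughly like this:

WORKSPACE_BASE=$(pwd)/workspace
docker run -it \
    --pull=always \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -e LLM_API_KEY="ollama" \
    -e LLM_BASE_URL="http://host.docker.internal:11434" \
    -e LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434" \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/all-hands-ai/openhands:0.9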

Now it shows code in the chat window, but I don't see any activity in the terminal or anything appearing in the workspace. I asked it to save to the workspace, but nothing happened. What is the expected behavior? Thanks

CD1010 commented 3 weeks ago

(screenshot attached)

tobitege commented 3 weeks ago

With local models it is most likely an issue with the context window being quite small. If your model has a context window of less than 8-10K tokens (my personal guess here), then our system prompts might get cut off and thus the model won't know about the methods to use for file handling.
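
One thing worth checking on the Ollama side: models typically default to a 2048-token context unless num_ctx is raised explicitly, which is well below that range. A sketch of bumping it via a Modelfile (the base model name here is just an example):

# Create an Ollama model variant with a larger context window (example base model)
cat > Modelfile <<'EOF'
FROM llama3.1
PARAMETER num_ctx 16384
EOF
ollama create llama3.1-16k -f Modelfile
# then point OpenHands at ollama/llama3.1-16k (LiteLLM's ollama/ prefix)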

CD1010 commented 3 weeks ago

I tried using my ChatGPT platform account. I changed to gpt-4o and now it is sort of working, but I get rate limit errors now:

14:44:36 - openhands:ERROR: llm.py:120 - litellm.RateLimitError: RateLimitError: OpenAIException - Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-4o in organization org-h62hU0YgGsYQfwiJ4qeZsYM7 on tokens per min (TPM): Limit 10000, Used 6626, Requested 4919. Please try again in 9.27s. Visit https://platform.openai.com/account/rate-limits to learn more.', 'type': 'tokens', 'param': None, 'code': 'rate_limit_exceeded'}}. Attempt #1 | You can customize these settings in the configuration.
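
The "customize these settings in the configuration" hint at the end of that error refers to the LLM retry options. The variable names below are an assumption (please check config.template.toml for the exact names), but raising the retry count and back-off would be added to the docker run flags roughly like this:

    # Assumed option names; verify against config.template.toml before relying on them
    -e LLM_NUM_RETRIES=5 \
    -e LLM_RETRY_MIN_WAIT=15 \
    -e LLM_RETRY_MAX_WAIT=120 \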

I could not get local ollama to work, even when setting max tokens to 16384, which it supports.

Not liking using a paid model for testing.

tobitege commented 3 weeks ago

I could not get local ollama to work, even when setting max tokens to 16384, which it supports.

Which models did you try?

Not liking using a paid model for testing.

I can relate. You could try OpenRouter, which currently has this model for free (for a limited time, I guess); you just need to set up an account and key, though I can't say whether it also requires setting up a payment option: https://openrouter.ai/models/nousresearch/hermes-3-llama-3.1-405b If you try that, keep me posted if/how that worked.
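
For that setup, OpenHands goes through LiteLLM, which addresses OpenRouter models with an openrouter/ prefix, so the settings would be something along these lines (passing LLM_MODEL as a docker -e variable is an assumption on my part; the model can otherwise be picked in the UI settings):

    # LLM_MODEL as an env var is an assumption; LLM_API_KEY is the same variable used above
    -e LLM_API_KEY="<your OpenRouter key>" \
    -e LLM_MODEL="openrouter/nousresearch/hermes-3-llama-3.1-405b" \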

CD1010 commented 3 weeks ago

I used gpt-4o to test. It eventually generated code, but the built-in browser was unresponsive. The page loaded, it just would not respond. I'll try OpenRouter.

CD1010 commented 2 weeks ago

Ok, I tried OpenRouter.

It created the files in the workspace. Cool. The only thing is that it still does not use its browser correctly.

Here is my complete chat:

Hello! How can I assist you today?

create an html page that adds 2 numbers that are input into text fields. When I click on the button, compute and display the answer

Sure! Let's start by creating an HTML file for this task.

First, I'll create a new file named index.html:

Great! Now, I'll add the HTML content to the index.html file. This content will include two input fields for the numbers, a button to compute the sum, and a section to display the result.

Let's proceed with that:

The HTML file has been created and updated successfully. You can now open the index.html file in a web browser to see the form for adding two numbers.
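
The agent's actual file isn't shown in the transcript; a minimal index.html matching the request might look roughly like this (heredoc form just for convenience):

cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
<head><title>Add Two Numbers</title></head>
<body>
  <input id="a" type="number"> + <input id="b" type="number">
  <button onclick="add()">Compute</button>
  <p id="result"></p>
  <script>
    // Read both fields, add them, and display the sum
    function add() {
      const a = parseFloat(document.getElementById('a').value) || 0;
      const b = parseFloat(document.getElementById('b').value) || 0;
      document.getElementById('result').textContent = 'Sum: ' + (a + b);
    }
  </script>
</body>
</html>
EOF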

Would you like me to start a simple HTTP server so you can access this HTML page through a web browser?

yes

Let's start a simple HTTP server using Python to serve the index.html file. I'll use Python's built-in http.server module for this purpose.

I'll start the server on port 8000:

The HTTP server is running on port 8000. You can access the index.html page by navigating to the following URL in your web browser:
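
The exact command isn't shown in the transcript; presumably it was something along the lines of:

    # Hypothetical: serve the directory containing index.html on port 8000
    python3 -m http.server 8000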

http://127.0.0.1:8000/index.html

If you have any further questions or need additional assistance, feel free to ask!

Continue

If you have any further tasks or modifications you'd like to make, please let me know! I'm here to help.

If everything is working as expected and you have no further requests, you can let me know, and I'll mark the task as complete.

run the application in your browser

I'm asking BrowsingAgent for help with this task.

In order to accomplish my goal, I need to navigate to the server's homepage at http://127.0.0.1:8000/index.html

In order to accomplish my goal of getting the content on "http://127.0.0.1:8000/index.html", I need to wait for the page to load and then review the content.

In order to accomplish my goal of getting the content on "http://127.0.0.1:8000/index.html", I need to wait for the page to load completely.

In order to accomplish my goal, I need to get the content of the page at "http://127.0.0.1:8000/index.html". Since the page has already been loaded, I will wait for a moment to ensure the content is fully rendered.

In order to accomplish my goal, I need to get the content of the homepage at "http://127.0.0.1:8000/index.html". I will use the goto action to navigate to the URL.

In order to accomplish my goal of getting the content on "http://127.0.0.1:8000/index.html", I need to ensure that the page has fully loaded. Since the current observation shows an empty accessibility tree, I will wait for a bit longer to see if the content loads.

In order to accomplish my goal, I need to navigate to the server's homepage and get its content. Since the previous attempts to load the page might not have been successful, I will try to navigate to the homepage again.

In order to accomplish my goal, I need to navigate to the server's homepage and retrieve its content. Since the previous attempts to navigate to "http://127.0.0.1:8000/index.html" did not yield any content, I will try to refresh the page to see if it loads correctly.

In order to accomplish my goal, I need to navigate to the server's homepage and retrieve its content. Since the previous attempts to load the page have not yielded any content, I will try to refresh the page to see if it resolves the issue.

In order to accomplish my goal of getting the content on "http://127.0.0.1:8000/index.html", I need to ensure that the page is loaded correctly. Since the current observation shows an empty accessibility tree, it suggests that the page might not be loading properly. I will try to refresh the page to see if it resolves the issue.

Too many errors encountered. Task failed.

All done! What's next on the agenda?

BrowsingAgent finishes task with content: Too many errors encountered. Task failed.

It seems that browsing the local server directly from this environment is not possible due to restrictions. However, you can access the page by opening the following URL in your web browser:

http://127.0.0.1:8000/index.html

This will allow you to interact with the HTML page and test the functionality of adding two numbers.

If you need any further assistance or modifications, please let me know!