All-Hands-AI / OpenHands

🙌 OpenHands: Code Less, Make More
https://all-hands.dev
MIT License
31.3k stars · 3.61k forks

Auto continue? + Remove some annoying limitations. #3735

Open imeDevelopers opened 1 week ago

imeDevelopers commented 1 week ago

This tool is excellent. We're so happy and thankful that OpenHands exists; we could never capture the feeling in written text.

However, there are a few things that could be improved:

1: The model sometimes asks the user to continue the task or a step (or sometimes it stops working without any error, waiting for the user to tell it to continue). Why not add an option that lets the user tell the model to continue the current step (and future steps) automatically whenever it asks for a continue?

Adding this feature would let the user take a rest until the model completes the task (which may take hours if it's complex enough). I want to give it something really complex, then relax or even sleep, and find the results when I wake up. Doesn't that sound good?

2: An annoying limitation: when we try a new model listed on OpenRouter (but not listed in OpenHands), why does OpenHands change what we're typing in the "Select a model" field in the settings window?

You shouldn't do this. Just let me type whatever I want in that field as a model name, and if you can't connect to that model, simply show an error.

Don't change what I write in that field; when the field loses focus, it automatically snaps back to an existing model from the list.

Keep it customizable for users too.

I just wanted to try "google/gemini-pro-1.5-exp", but the field reverts to "google/gemini-pro-1.5", so I can't use OpenRouter!

Even the "custom model" field only accepts a URL (not a custom model name for the selected provider!).

Or simply make "custom model" accept an endpoint, a model name, and an API key,

not only the endpoint and the API key.

I hope you understand these situations, thank you!

imeDevelopers commented 1 week ago

Why is something like this: nousresearch/hermes-3-llama-3.1-405b

not supported in the OpenRouter model list?!

These limitations undermine the entire tool and make users look for alternatives!

mamoodi commented 1 week ago

Hey @imeDevelopers. Thanks so much for taking the time to write all this out. For your second issue, in the settings modal if you toggle Use custom model and then put google/gemini-pro-1.5-exp for the model, it doesn't work?

imeDevelopers commented 1 week ago

@mamoodi

> Hey @imeDevelopers. Thanks so much for taking the time to write all this out. For your second issue, in the settings modal if you toggle Use custom model and then put google/gemini-pro-1.5-exp for the model, it doesn't work?

Correct, it doesn't work. But!

After a very long series of retries to make the tool work, I noticed something in the terminal that helped me understand how the selected model is processed internally as a "path" to the model. In the end I used "openrouter/google/gemini-pro-1.5-exp", i.e. provider/publisher/model. I suffered to reach such information!

However, "openrouter/nousresearch/hermes-3-llama-3.1-405b" worked and I was able to talk to the model, while "openrouter/google/gemini-pro-1.5-exp" did not (I am still getting errors, a lot of them!).
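For illustration, the "provider/publisher/model" convention discovered above can be sketched like this. The helper below is hypothetical (it is not part of OpenHands or LiteLLM); it just shows that the first path segment names the provider, while the publisher stays part of the model id:

```python
def split_model_path(model_path: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model id).

    Hypothetical helper for illustration only; OpenHands and LiteLLM
    perform their own parsing internally.
    """
    provider, _, model = model_path.partition("/")
    return provider, model


# "openrouter/google/gemini-pro-1.5-exp" splits into provider "openrouter"
# and model id "google/gemini-pro-1.5-exp".
provider, model = split_model_path("openrouter/google/gemini-pro-1.5-exp")
```

This is why typing only "google/gemini-pro-1.5-exp" fails: without the leading "openrouter/" segment, the string is not routed to the OpenRouter provider.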

Here's a new problem now:

WORKSPACE_BASE=$(pwd)/workspace

docker run -it --pull=always \
    -e SANDBOX_RUNTIME_CONTAINER_IMAGE=ghcr.io/all-hands-ai/runtime:0.9.2-nikolaik \
    -e SANDBOX_USER_ID=$(id -u) \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    --name openhands-app-$(date +%Y%m%d%H%M%S) \
    ghcr.io/all-hands-ai/openhands:0.9
0.9: Pulling from all-hands-ai/openhands
Digest: sha256:0d511d5b76d8d65506241ccd0ca88b408458712326ceff65ae84a9dc1af93b87
Status: Image is up to date for ghcr.io/all-hands-ai/openhands:0.9
Starting OpenHands...
Running OpenHands as root
INFO:     Started server process [10]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO:     172.17.0.1:40780 - "GET / HTTP/1.1" 200 OK
INFO:     172.17.0.1:40780 - "GET /assets/index-DOvY1mIk.js HTTP/1.1" 200 OK
INFO:     172.17.0.1:40782 - "GET /assets/index--zaQvvCW.css HTTP/1.1" 200 OK
INFO:     172.17.0.1:40782 - "GET /locales/en/translation.json HTTP/1.1" 200 OK
INFO:     172.17.0.1:40780 - "GET /locales/en-US/translation.json HTTP/1.1" 404 Not Found
INFO:     172.17.0.1:40782 - "GET /favicon-32x32.png HTTP/1.1" 200 OK
INFO:     172.17.0.1:40782 - "GET /api/options/models HTTP/1.1" 200 OK
INFO:     172.17.0.1:40782 - "GET /api/options/agents HTTP/1.1" 200 OK
INFO:     172.17.0.1:40782 - "GET /api/options/security-analyzers HTTP/1.1" 200 OK
INFO:     ('172.17.0.1', 43786) - "WebSocket /ws" [accepted]
INFO:     connection open
04:58:55 - openhands:WARNING: llm.py:85 - Could not get model info for openrouter/google/gemini-pro-1.5-exp:
This model isn't mapped yet. model=openrouter/google/gemini-pro-1.5-exp, custom_llm_provider=openrouter. Add it here - https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json.
04:58:55 - openhands:INFO: agent.py:79 - Using security analyzer: 
04:58:55 - openhands:INFO: agent.py:90 - Initializing runtime `eventstream` now...
04:58:55 - openhands:INFO: runtime.py:183 - Starting container with image: ghcr.io/all-hands-ai/runtime:0.9.2-nikolaik and name: openhands-sandbox-620acf0b-b6cf-4ef8-9f3a-f23dbdd1857c_9ca82228-7a43-4085-ba24-4a77f2ff7ab6
04:58:55 - openhands:INFO: runtime.py:204 - Mount dir: /workspace
04:58:56 - openhands:INFO: runtime.py:238 - Container started. Server url: http://host.docker.internal:60967
04:58:56 - openhands:INFO: runtime.py:157 - Container initialized with plugins: ['agent_skills', 'jupyter']
04:58:56 - openhands:INFO: runtime.py:160 - Container initialized with env vars: None
04:58:56 - openhands:INFO: agent.py:116 - Agents: {'agent': AgentConfig(micro_agent_name=None, memory_enabled=False, memory_max_threads=2, llm_config=None)}
04:58:56 - openhands:INFO: agent.py:117 - Creating agent CodeActAgent using LLM openrouter/google/gemini-pro-1.5-exp
04:58:56 - openhands:ERROR: state.py:118 - Failed to restore state from session: sessions/620acf0b-b6cf-4ef8-9f3a-f23dbdd1857c/agent_state.pkl
04:58:56 - openhands:INFO: agent.py:139 - Error restoring state: sessions/620acf0b-b6cf-4ef8-9f3a-f23dbdd1857c/agent_state.pkl
04:58:56 - openhands:INFO: agent.py:140 - Agent controller initialized.
04:58:56 - openhands:INFO: session.py:139 - Server event
04:58:56 - openhands:INFO: agent_controller.py:150 - [Agent Controller 620acf0b-b6cf-4ef8-9f3a-f23dbdd1857c] Starting step loop...
04:58:56 - openhands:INFO: session.py:139 - Server event
04:58:56 - openhands:INFO: runtime.py:263 - 
------------------------------Container logs:------------------------------
    |exec /openhands/miniforge3/bin/mamba: no such file or directory
------------------------------------------------------------------------------------------
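For reference, the "This model isn't mapped yet" warning in the logs above points at LiteLLM's model-price map (model_prices_and_context_window.json). An entry there follows roughly this shape; the numeric values below are placeholders for illustration, not the real limits or prices of this model:

```json
{
  "openrouter/google/gemini-pro-1.5-exp": {
    "max_tokens": 8192,
    "max_input_tokens": 1000000,
    "max_output_tokens": 8192,
    "input_cost_per_token": 0.0,
    "output_cost_per_token": 0.0,
    "litellm_provider": "openrouter",
    "mode": "chat"
  }
}
```

Note that this warning is non-fatal (it only means cost/context metadata is missing); the actual failure in the logs is the container error `exec /openhands/miniforge3/bin/mamba: no such file or directory`.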
mamoodi commented 1 week ago

@amanape these are good suggestions for the model selection. I have also noticed it being quite stubborn: even though it lets you type in the box, it reverts to whatever it wants when you don't have Custom model selected.

imeDevelopers commented 1 week ago

@mamoodi @amanape This issue is urgent, the tool doesn't work! `exec /openhands/miniforge3/bin/mamba: no such file or directory`

amanape commented 1 week ago

@imeDevelopers Sorry for the late response. Right now it is set to a fixed set of providers and models returned by LiteLLM. After looking through the documentation, I am aware that there are some models the user can enter themselves on top of the provider, but that is not supported (as you said, only the displayed providers and models are supported).

The alternative is to use a custom model and set the path yourself. For the case where you need to set a base URL on top of a custom model, I hope to have that done by the end of this week (there is a missing input field).

For now, please create a separate issue, apart from this one, specifically for the problem you are facing and your suggestions. I will take a look at it and hope to have it solved by tomorrow!

imeDevelopers commented 1 week ago

@amanape @mamoodi #3748

imeDevelopers commented 1 week ago

(screenshot attached)

tobitege commented 1 week ago

Not every time "Continue" appears is there actually something to continue; often the LLM is just waiting for further input on what to do next. Do you have examples of replies where an automated continue would actually make sense?

imeDevelopers commented 1 week ago

@tobitege There is a very wide range of examples; each user has their own situation.

You could add an option that automatically presses the "Continue" button whenever it appears, since the user is the only person who can judge whether that fits their own case.

And there should be another feature, a "termination trigger", that tells the user the model has completed the task exactly as requested and should now be terminated. In that case, the "auto continue" feature simply stops sending the "continue" message (and it shouldn't fire again once the termination trigger has fired).
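The requested behaviour can be sketched as a small loop. Everything here is hypothetical: the `Session` class is a toy stand-in, not the OpenHands API, and the state names are invented for illustration.

```python
class Session:
    """Toy agent session that pauses for input twice, then finishes.

    Stand-in for illustration only; this is NOT the OpenHands API.
    """

    def __init__(self):
        self._states = ["awaiting_input", "awaiting_input", "finished"]
        self.sent = []

    def step(self):
        # Advance the agent one step and report its state.
        return self._states.pop(0)

    def send(self, message):
        self.sent.append(message)


def run_with_auto_continue(session, max_continues=50):
    """Auto-press the hypothetical Continue button until the task terminates."""
    continues = 0
    while True:
        state = session.step()
        if state == "finished":          # "termination trigger": stop for good
            return "terminated", continues
        if state == "awaiting_input":    # where the Continue button would appear
            if continues >= max_continues:
                return "gave_up", continues  # safety valve against endless loops
            continues += 1
            session.send("continue")


result, n_continues = run_with_auto_continue(Session())
```

The `max_continues` cap addresses tobitege's concern above: if the agent keeps pausing without making progress, the loop gives up instead of pressing Continue forever.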

mamoodi commented 1 week ago

Thanks @imeDevelopers. Your feature requests are valid. This is an open source project so when an issue is opened, anyone can pick it up and start working on it if they want! That can include you :) !

Being a new-ish project, there are a lot of enhancement requests. Hopefully, as the project matures, these types of enhancements can be added to make the experience better.

imeDevelopers commented 1 week ago

Thank you @mamoodi, I appreciate your understanding. I am currently very busy with other things and don't have time to work on this project. I would also need some time to understand the project and how it works internally before I could add these important features; this may happen over time.

imeDevelopers commented 1 week ago

@mamoodi We should all use OpenHands to massively enhance OpenHands 😂

But sadly, it doesn't work anymore.

mamoodi commented 1 week ago

It seems we have identified an issue with the Docker images, so hopefully we can sort this out soon and you can try again.