All-Hands-AI / OpenHands

🙌 OpenHands: Code Less, Make More
https://all-hands.dev
MIT License

OpenDevin refuses to connect locally - Goes to OpenAI #1464

Closed zeta274 closed 4 months ago

zeta274 commented 4 months ago

Is there an existing issue for the same bug?

Describe the bug

There was an earlier issue about OpenDevin demanding an OpenAI token, and it was posted that this had been patched, but I don't see the fix. I followed the instructions and set everything up, but OpenDevin still wants to connect to OpenAI, even though I pointed localhost:port at my local OpenAI-compatible server. The same server (vLLM) works fine with other similar tools, so I know the problem isn't on that side.

Current Version

ghcr.io/opendevin/opendevin:0.4.0

Installation and Configuration

export LLM_API_KEY="KEY"

docker run \
    -e LLM_API_KEY \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
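[Editor's note: the command above never tells the container where the local server lives. A minimal sketch of one way to pass that in, assuming this OpenDevin version honors an LLM_BASE_URL environment variable and that the local server listens on port 8000 (both are assumptions, not taken from this thread):]

```shell
# Sketch: point the container at a local OpenAI-compatible server.
# host.docker.internal reaches the host from inside the container
# (enabled by --add-host in the command above); port 8000 is assumed.
export LLM_BASE_URL="http://host.docker.internal:8000/v1"
export LLM_API_KEY="KEY"

docker run \
    -e LLM_API_KEY \
    -e LLM_BASE_URL \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    --add-host host.docker.internal:host-gateway \
    ghcr.io/opendevin/opendevin:0.4.0
```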

Model and Agent

Should be a TheBloke model, but it insists on GPT-3.5

Reproduction Steps

No response

Logs, Errors, Screenshots, and Additional Context

No response

enyst commented 4 months ago

@zeta274 Can you please start the app in the browser, and enter the model in the Settings there? The model name you enter and save in the UI is the model name that will be used.

zeta274 commented 4 months ago

LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=meta-llama/Meta-Llama-3-8B-Instruct

rbren commented 4 months ago

@zeta274 that doesn't look like a valid model name to me. You might want something like ollama/Meta-Llama-3-8B-Instruct
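[Editor's note: the error in the previous comment comes from provider routing in LiteLLM, which OpenDevin uses under the hood: the text before the first `/` is treated as a provider prefix, and `meta-llama` is not a recognized provider, while `ollama` or `openai` would be. A rough illustrative sketch of that routing, not LiteLLM's actual code, with an abbreviated provider set:]

```python
# Illustrative sketch of LiteLLM-style provider-prefix routing.
# KNOWN_PROVIDERS is abbreviated; the real list is much longer.
KNOWN_PROVIDERS = {"openai", "ollama", "anthropic", "azure"}

def resolve_provider(model: str) -> str:
    """Return the provider prefix of a model string, or raise if missing."""
    prefix, sep, _ = model.partition("/")
    if sep and prefix in KNOWN_PROVIDERS:
        return prefix
    # This mirrors the error zeta274 saw: the prefix is not a known provider.
    raise ValueError(f"LLM Provider NOT provided. You passed model={model}")

resolve_provider("ollama/Meta-Llama-3-8B-Instruct")  # accepted: "ollama"
# resolve_provider("meta-llama/Meta-Llama-3-8B-Instruct") raises ValueError
```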

rbren commented 4 months ago

Going to close this one as I think it's just a model name issue (and the fact that it needs to be set in the UI)

rbren commented 4 months ago

But feel free to ping this thread if you're still having trouble!

zeta274 commented 4 months ago

I'm not using Ollama; I'm on vLLM, with the OpenAI-compatible API.

enyst commented 4 months ago

@zeta274 Can you add the model name in the UI? If it still doesn't work, please tell us exactly what you are passing, both in the run command and in the UI.
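[Editor's note: for an OpenAI-compatible server such as vLLM, LiteLLM-based tools generally take the model under the `openai/` provider prefix together with a base URL, rather than the bare Hugging Face name. A hedged sketch of what that could look like here; the port, the `/v1` path, and the env-variable names are assumptions, not confirmed in this thread:]

```shell
# Sketch for a vLLM OpenAI-compatible endpoint (port 8000 assumed).
# In the UI Settings, the model would then be entered as:
#   openai/meta-llama/Meta-Llama-3-8B-Instruct
export LLM_BASE_URL="http://host.docker.internal:8000/v1"
export LLM_API_KEY="KEY"   # vLLM accepts any key unless one is enforced
```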