
[Bug]: wheel gear button not showing and "/api/configurations HTTP/1.1" 404 Not Found #1356

Closed · stonelzhang closed this 5 months ago

stonelzhang commented 5 months ago

Is there an existing issue for the same bug?

Describe the bug

When I run OpenDevin from the host (not via Docker), the terminal returned:

```
...
INFO: connection open
INFO: 127.0.0.1:57024 - "GET /api/litellm-models HTTP/1.1" 200 OK
INFO: 127.0.0.1:57025 - "GET /api/agents HTTP/1.1" 200 OK
INFO: 127.0.0.1:57029 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: 127.0.0.1:57028 - "GET /api/configurations HTTP/1.1" 404 Not Found
INFO: 127.0.0.1:57030 - "GET /api/agents HTTP/1.1" 200 OK
```
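
To confirm the 404 comes from the backend itself rather than the UI, the failing endpoint can be hit directly. This is just a sketch: the port 3000 is an assumption, substitute whatever port `make run` reports for the backend.

```bash
# Hit the failing endpoint directly; 3000 is an assumed backend port.
curl -i http://localhost:3000/api/configurations

# For comparison, an endpoint that returned 200 OK in the same log:
curl -i http://localhost:3000/api/agents
```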

And it looks like OpenDevin still loads an OpenAI model as the LLM:

```
16:15:30 - opendevin:INFO: agent.py:145 - Creating agent MonologueAgent using LLM gpt-3.5-turbo
16:15:30 - opendevin:INFO: llm.py:52 - Initializing LLM with model: gpt-3.5-turbo
16:15:31 - opendevin:INFO: ssh_box.py:357 - Container stopped
16:15:31 - opendevin:WARNING: ssh_box.py:369 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
```
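
Since the logs show gpt-3.5-turbo despite `LLM_MODEL` being set, one quick sanity check is whether the Ollama server is reachable at the configured base URL and actually serving the model, a sketch using Ollama's standard model-listing endpoint:

```bash
# List the models the local Ollama server is serving;
# codellama:7b should appear here if LLM_BASE_URL in .env is correct.
curl http://localhost:11434/api/tags
```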

A "pause" button appears in the bottom-left corner instead of the gear wheel. (I had previously started OpenDevin successfully using Docker.)

I couldn't start a task; it seems an agent was already running when OpenDevin started and wouldn't stop. I'm on macOS with an M1 chip and 16 GB of memory.

Here's the .env file I source before `make run`:

```
export LLM_API_KEY="ollama"
export LLM_MODEL="ollama/codellama:7b"
export LLM_BASE_URL="http://localhost:11434"
export LLM_EMBEDDING_MODEL="local"
export WORKSPACE_BASE=/Users/zhanglei/opendevin
```

Here's the config.toml:

```
bash-3.2$ cat OpenDevin/config.toml
LLM_MODEL="ollama/codellama:7b"
LLM_API_KEY="ollama"
LLM_EMBEDDING_MODEL="local"
WORKSPACE_BASE="/Users/zhanglei/opendevin"
```

Current Version

Apologies, I'm not sure how to check this. The Docker image would make the version easier to pin down, but I'm running from source on the local host.
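
For anyone else unsure how to report the version when running from source, the checked-out commit hash is a reasonable stand-in, a sketch assuming the clone directory used in the steps below:

```bash
# Report the commit the source tree is at; a stand-in for a version
# number when not running the Docker image.
cd OpenDevin
git rev-parse --short HEAD
git log -1 --date=short --format='%h %ad %s'
```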

Installation and Configuration

These are the major steps I went through to kick off OpenDevin:

git clone https://github.com/OpenDevin/OpenDevin.git

cd OpenDevin

make build

vi .env and enter the content below:

```
export LLM_API_KEY="ollama"
export LLM_MODEL="ollama/codellama:7b"
export LLM_BASE_URL="http://localhost:11434"
export LLM_EMBEDDING_MODEL="local"
export WORKSPACE_BASE=/Users/zhanglei/opendevin
```

source .env
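
To double-check that the variables actually made it into the shell that runs `make run`, something like this sketch works; an unset `LLM_MODEL` here would explain the gpt-3.5-turbo default seen in the logs:

```bash
# Confirm the exported variables are visible to the current shell.
env | grep -E '^(LLM_|WORKSPACE_BASE)'
```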

I also ran `make setup-config`, which produced this config:

```
bash-3.2$ cat OpenDevin/config.toml
LLM_MODEL="ollama/codellama:7b"
LLM_API_KEY="ollama"
WORKSPACE_BASE="/Users/zhanglei/opendevin"
```

Finally, launch OpenDevin:
make run

Model and Agent

Reproduction Steps

git clone https://github.com/OpenDevin/OpenDevin.git

cd OpenDevin

make build

make setup-config

vi .env and enter the content below:

```
export LLM_API_KEY="ollama"
export LLM_MODEL="ollama/codellama:7b"
export LLM_BASE_URL="http://localhost:11434"
export LLM_EMBEDDING_MODEL="local"
export WORKSPACE_BASE=/Users/zhanglei/opendevin
```

source .env

I also ran `make setup-config`, which produced this config:

```
bash-3.2$ cat OpenDevin/config.toml
LLM_MODEL="ollama/codellama:7b"
LLM_API_KEY="ollama"
WORKSPACE_BASE="/Users/zhanglei/opendevin"
```

Finally, launch OpenDevin:

make run

Logs, Errors, Screenshots, and Additional Context

```
INFO: ('127.0.0.1', 58698) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiIwODQwNjI1Ni05YWNmLTRmODQtYmQ1MC0yNmE2NDRjOGQ5N2YifQ.hyOBcYdj91sQ35H82a9gbSaodLwNUcoa8CabfqkwzAc" [accepted]
Starting loop_recv for sid: 08406256-9acf-4f84-bd50-26a644c8d97f
INFO: connection open
INFO: 127.0.0.1:58702 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: 127.0.0.1:58701 - "GET /api/configurations HTTP/1.1" 404 Not Found
INFO: ('127.0.0.1', 58711) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiJhOTc3OTU4Yi1hYjNjLTQyZTQtOGM2YS0yNGM5YjI4YzBkNzcifQ.l97z1jt2ax3xxhcx5j9U-62o6AJYEUlAHQPawrPuk4c" [accepted]
Starting loop_recv for sid: a977958b-ab3c-42e4-8c6a-24c9b28c0d77
INFO: connection open
16:15:30 - opendevin:INFO: agent.py:145 - Creating agent MonologueAgent using LLM gpt-3.5-turbo  # still gpt
16:15:30 - opendevin:INFO: llm.py:52 - Initializing LLM with model: gpt-3.5-turbo
16:15:31 - opendevin:INFO: ssh_box.py:357 - Container stopped
16:15:31 - opendevin:WARNING: ssh_box.py:369 - Using port forwarding for Mac OS. Server started by OpenDevin will not be accessible from the host machine at the moment. See https://github.com/OpenDevin/OpenDevin/issues/897 for more information.
16:15:31 - opendevin:INFO: ssh_box.py:377 - Mounting workspace directory: /Users/zhanglei/opendevin
16:15:32 - opendevin:INFO: ssh_box.py:400 - Container started
```

johnnyaug commented 5 months ago

It seems like your client and server are not compatible. The /api/configurations endpoint was removed in #1169, so it's probably the frontend which is not up to date. Make sure your code is at the latest commit from main.
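
A minimal way to sync with main and rebuild, assuming a plain clone with no local changes (stash or commit them first if you have any):

```bash
# Bring the clone up to date and rebuild frontend and backend.
cd OpenDevin
git checkout main
git pull origin main
make build
```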

rbren commented 5 months ago

Seems like you're not on the latest code, as johnnyaug points out. The gear wheel is now in the bottom-right.

rbren commented 5 months ago

Going to close this one since it seems you're not on the latest code. LMK if it needs to be reopened, though.