All-Hands-AI / OpenHands

🙌 OpenHands: Code Less, Make More
https://all-hands.dev
MIT License

Network in docker (NoneType has no attribute 'request') #1202

Closed. pegostar closed this issue 6 months ago.

pegostar commented 6 months ago

When I run Ollama on my local PC with the gemma:2b model, I get a response. My REST call works; below is a screenshot:

[screenshot]

When I run OpenDevin with Docker using this command:

docker run -e LLM_API_KEY="ollama" -e LLM_MODEL="ollama/gemma:2b" -e LLM_EMBEDDING_MODEL="local" -e LLM_BASE_URL="http://localhost:11434" -e WORKSPACE_DIR="C:\Projects\IA\Workspace" -e SANDBOX_TYPE="exec" -e WORKSPACE_MOUNT_PATH="C:\Projects\IA\Workspace" -v "C:\Projects\IA\Workspace":/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 ghcr.io/opendevin/opendevin:main python opendevin/main.py --task "write a bash script that prints hello"

I get the following error

07:07:19 - PLAN
write a bash script that prints hello
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 198, in _new_conn
    sock = connection.create_connection(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/app/.venv/lib/python3.12/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 793, in urlopen
    response = self._make_request(
               ^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 496, in _make_request
    conn.request(
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 400, in request
    self.endheaders()
  File "/usr/local/lib/python3.12/http/client.py", line 1331, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.12/http/client.py", line 1091, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.12/http/client.py", line 1035, in send
    self.connect()
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 238, in connect
    self.sock = self._new_conn()
                ^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 213, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f6d864f0200>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 847, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6d864f0200>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1912, in completion
    generator = ollama.get_ollama_response(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/ollama.py", line 194, in get_ollama_response
    response = requests.post(
               ^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f6d864f0200>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/agenthub/monologue_agent/utils/monologue.py", line 73, in condense
    resp = llm.completion(messages=messages)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/app/opendevin/llm/llm.py", line 48, in wrapper
    resp = completion_unwrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2947, in wrapper
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2845, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2127, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8573, in exception_type
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8381, in exception_type
    raise ServiceUnavailableError(
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/exceptions.py", line 153, in __init__
    super().__init__(
  File "/app/.venv/lib/python3.12/site-packages/openai/_exceptions.py", line 81, in __init__
    super().__init__(message, response.request, body=body)
                              ^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'request'
07:07:19 - opendevin:ERROR: agent_controller.py:106 - Error condensing thoughts: 'NoneType' object has no attribute 'request'
07:07:19 - OBSERVATION
Error condensing thoughts: 'NoneType' object has no attribute 'request'

gtsop-d commented 6 months ago

I had exactly the same problem. Adding --add-host host.docker.internal:host-gateway and --network="host" to the docker run command worked for me, running on Linux.

https://stackoverflow.com/questions/24319662/from-inside-of-a-docker-container-how-do-i-connect-to-the-localhost-of-the-mach
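
For reference, here is a quick way to confirm that the host-gateway mapping actually lets a container reach Ollama on the host before re-running OpenDevin. This is only a sanity-check sketch; the busybox image and the wget call are my own choice, and /api/tags is Ollama's model-listing endpoint:

# Throwaway container with the same --add-host flag; prints Ollama's model list if reachable.
docker run --rm \
  --add-host host.docker.internal:host-gateway \
  busybox wget -qO- http://host.docker.internal:11434/api/tags

If this prints a JSON list of your models, the container can see Ollama; if the request is refused, the problem is in the host networking rather than in OpenDevin itself.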

pegostar commented 6 months ago

After changing from

-e LLM_BASE_URL="http://localhost:11434"

to

-e LLM_BASE_URL="http://host.docker.internal:11434"

it keeps printing

....
09:40:18 - PLAN
write a bash script that prints hello
09:40:20 - ACTION
AgentThinkAction(thought="Let's review the previous actions and their results. I need to ensure that I'm doing things in the right order and that I'm not missing any critical steps.", action=<ActionType.THINK: 'think'>)
...

without ever returning a result. I tried your configuration, but it doesn't work for me.

gtsop-d commented 6 months ago

Which OS are you running? Just to verify, the command I am running is this:

export LLM_API_KEY=""
export LLM_MODEL="ollama/mistral:7b"
export LLM_EMBEDDING_MODEL="mistral:7b"
export LLM_BASE_URL="http://localhost:11434"
export WORKSPACE_DIR="$(pwd)/workspace"

docker run \
  -e LLM_API_KEY \
  -e LLM_MODEL \
  -e LLM_EMBEDDING_MODEL \
  -e LLM_BASE_URL \
  -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
  -v $WORKSPACE_DIR:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  --add-host host.docker.internal:host-gateway \
  --network="host" \
  ghcr.io/opendevin/opendevin:main

Edit:

09:40:18 - PLAN
write a bash script that prints hello
09:40:20 - ACTION
AgentThinkAction(thought="Let's review the previous actions and their results. I need to ensure that I'm doing things in the right order and that I'm not missing any critical steps.", action=<ActionType.THINK: 'think'>)

This is different from what you got before (actually it is not even an error); it looks like you are getting a reply from your local model, which means the original issue is fixed.
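
If it helps, here is a slightly trimmed variant of the same idea (a sketch, assuming Linux: with --network="host" the container shares the host's network namespace, so localhost inside the container already points at the host and the -p 3000:3000 mapping is effectively ignored). With the same exports as above:

docker run \
  -e LLM_API_KEY \
  -e LLM_MODEL \
  -e LLM_EMBEDDING_MODEL \
  -e LLM_BASE_URL \
  -e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
  -v $WORKSPACE_DIR:/opt/workspace_base \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --network="host" \
  ghcr.io/opendevin/opendevin:main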

rbren commented 6 months ago

🤔 it looks like the summarization step is still trying to use OpenAI, even though you've specified a different model...

rbren commented 6 months ago

@pegostar the fact that it keeps on giving the same message back isn't super surprising--the local/OSS models aren't very good

pegostar commented 6 months ago

@rbren I'm on Windows, and I use this command:

docker run -e LLM_API_KEY="" -e LLM_MODEL="ollama/gemma:2b" -e LLM_EMBEDDING_MODEL="gemma:2b" -e LLM_BASE_URL="http://localhost:11434" -e WORKSPACE_DIR="C:\Projects\IA\Workspace" -e SANDBOX_TYPE="exec" -e WORKSPACE_MOUNT_PATH="C:\Projects\IA\Workspace" -v "C:\Projects\IA\Workspace":/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 --add-host host.docker.internal:host-gateway --network="host" ghcr.io/opendevin/opendevin:main python opendevin/main.py --task "write a bash script that prints hello"

And this is my output:

STEP 0

11:52:43 - PLAN
write a bash script that prints hello
11:52:46 - opendevin:ERROR: agent_controller.py:107 - 'NoneType' object has no attribute 'request'
11:52:46 - OBSERVATION
'NoneType' object has no attribute 'request'

If I change localhost to http://host.docker.internal:11434,

For example: docker run -e LLM_API_KEY="" -e LLM_MODEL="ollama/gemma:2b" -e LLM_EMBEDDING_MODEL="gemma:2b" -e LLM_BASE_URL="http://host.docker.internal:11434" -e WORKSPACE_DIR="C:\Projects\IA\Workspace" -e SANDBOX_TYPE="exec" -e WORKSPACE_MOUNT_PATH="C:\Projects\IA\Workspace" -v "C:\Projects\IA\Workspace":/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 --add-host host.docker.internal:host-gateway --network="host" ghcr.io/opendevin/opendevin:main python opendevin/main.py --task "write a bash script that prints hello"

This is my output

============== STEP 0

11:54:21 - PLAN
write a bash script that prints hello
11:54:25 - ACTION
AgentThinkAction(thought="I should probably start by running ls to see what's here.", action=<ActionType.THINK: 'think'>)

============== STEP 1

11:54:25 - PLAN
write a bash script that prints hello
11:54:27 - ACTION
AgentThinkAction(thought="Let's consider our next steps. What should we do next?", action=<ActionType.THINK: 'think'>)

...

but no file ever appears in the workspace.

gtsop-d commented 6 months ago

The fact that you went from step 0 to step 1 means you got a response from your local LLM, so your initial error seems resolved. The fact that you don't get valuable output from your LLM has to do with the performance of gemma:2b. I get the exact same output if I use gemma:2b; mine actually gets stuck in a loop trying to ls. But this, I feel, is a different problem from the original:

==============
STEP 0

13:10:38 - PLAN
write me a hello world bash script
13:11:55 - ACTION
AgentThinkAction(thought="I should probably start by running `ls` to see what's here.", action=<ActionType.THINK: 'think'>)

==============
STEP 1

13:11:55 - PLAN
write me a hello world bash script
13:12:29 - ACTION
AgentThinkAction(thought="I should probably start by running `ls` to see what's here.", action=<ActionType.THINK: 'think'>)

==============
STEP 2

13:12:29 - PLAN
write me a hello world bash script
13:13:02 - ACTION
AgentThinkAction(thought="I should probably start by running `ls` to see what's here.", action=<ActionType.THINK: 'think'>)

==============

pegostar commented 6 months ago

@gtsop-d If you prefer, you can open a new bug. I chose gemma:2b because it was the fastest of the models; the problem does not change if I use another model... OpenDevin doesn't start under Windows...
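
One caveat worth noting here, though it is an assumption rather than something verified in this thread: on Docker Desktop for Windows, --network="host" is a Linux-only feature, while host.docker.internal normally resolves out of the box. A Windows-oriented sketch of the same run would therefore drop the host-network flag and keep the host.docker.internal base URL:

docker run -e LLM_API_KEY="" -e LLM_MODEL="ollama/gemma:2b" -e LLM_EMBEDDING_MODEL="gemma:2b" -e LLM_BASE_URL="http://host.docker.internal:11434" -e WORKSPACE_DIR="C:\Projects\IA\Workspace" -e SANDBOX_TYPE="exec" -e WORKSPACE_MOUNT_PATH="C:\Projects\IA\Workspace" -v "C:\Projects\IA\Workspace":/opt/workspace_base -v /var/run/docker.sock:/var/run/docker.sock -p 3000:3000 --add-host host.docker.internal:host-gateway ghcr.io/opendevin/opendevin:main python opendevin/main.py --task "write a bash script that prints hello"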

enyst commented 6 months ago

🤔 it looks like the summarization step is still trying to use OpenAI, even though you've specified a different model...

Yes, that looks confusing, but a number of LiteLLM exceptions are subclasses of OpenAI exceptions. That can make openai show up in the log even when the request never actually went to OpenAI. That seems to be the case here, since it only shows up in __init__.

pegostar commented 6 months ago

@enyst If I call the Ollama API directly, it responds. See the screenshot at the beginning of the thread.

enyst commented 6 months ago

@pegostar I apologize for the confusion, I wasn't doubting that. I see it starts doing steps, so it is connecting now, and it runs into this issue https://github.com/OpenDevin/OpenDevin/issues/326

Can you please make sure to update to the latest version (git pull)? There's been a strange little issue fixed, which may help with what the LLM "understands" it has to do.
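
In case it helps, the update would look roughly like this (a sketch, assuming you run the prebuilt image rather than a local build):

# refresh the source checkout and the published image
git pull
docker pull ghcr.io/opendevin/opendevin:main

and then re-run the same docker run command.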

Umpire2018 commented 6 months ago

Had exactly the same problem, I tried adding --add-host host.docker.internal:host-gateway and --network="host" in the docker run command. Worked for me, running on linux.

@gtsop-d @PierrunoYT https://docs.docker.com/reference/cli/docker/container/run/#add-host

sudo docker run \
  --add-host host.docker.internal=host-gateway \
  -e LLM_API_KEY="ollama" \
  -e LLM_MODEL="ollama:gemma:2b" \
  -e LLM_EMBEDDING_MODEL="local" \
  -e LLM_BASE_URL="http://host.docker.internal:11434" \
  -e WORKSPACE_MOUNT_PATH="$WORKSPACE_DIR" \
  -v "$WORKSPACE_DIR:/opt/workspace_base" \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e SANDBOX_TYPE=exec \
  ghcr.io/opendevin/opendevin:main python opendevin/main.py --task "write a bash script that prints hello"

This is my config for reference, running on Linux. Using --add-host host.docker.internal=host-gateway and -e LLM_BASE_URL="http://host.docker.internal:11434" is the key.

Maybe this can close #897 @rbren

But it still ends with 'NoneType' object has no attribute 'request' :( Using latest main, 6b0408d47ca06153663424d3befb76f33cad0300.

spoonbobo commented 6 months ago

Same issue.

enyst commented 6 months ago

@spoonbobo do you mean "NoneType object has no attribute request" error?

spoonbobo commented 6 months ago

@enyst

  File "/app/.venv/lib/python3.12/site-packages/openai/_exceptions.py", line 81, in __init__
    super().__init__(message, response.request, body=body)
                              ^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'request'
17:25:38 - opendevin:ERROR: agent_controller.py:178 - Error condensing thoughts: 'NoneType' object has no attribute 'request'
17:25:38 - OBSERVATION
Error condensing thoughts: 'NoneType' object has no attribute 'request'

Yup, some support and code fixes are needed for Ollama serving; not sure why an openai exception is showing up here. I don't even have an OpenAI key...

enyst commented 6 months ago

@spoonbobo If that's the only place it appears in the error, then it's just showing up because LiteLLM uses some class definitions for its OpenAI compatibility. It doesn't mean this request went to OpenAI.

Can you please try the suggestion in the 2nd comment above, or the 4th comment?

spoonbobo commented 6 months ago

@spoonbobo If that's the only place it appears in the error, then it's just showing up because LiteLLM uses some class definitions for its OpenAI compatibility. It doesn't mean this request went to OpenAI.

Can you please try the suggestion in the 2nd comment above, or the 4th comment?

Nice! 4th comment worked for me.

fengyunzaidushi commented 6 months ago

@spoonbobo If that's the only place it appears in the error, then it's just showing up because LiteLLM uses some class definitions for its OpenAI compatibility. It doesn't mean this request went to OpenAI.

Can you please try the suggestion in the 2nd comment above, or the 4th comment?

I have the same issue. How did you solve it? I have tried this, and it does not work:

docker run \
    --add-host host.docker.internal=host-gateway \
    --network="host" \
    -e LLM_API_KEY="ollama" \
    -e LLM_BASE_URL="http://host.docker.internal:11434" \
    -e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
    -v $WORKSPACE_BASE:/opt/workspace_base \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -p 3000:3000 \
    ghcr.io/opendevin/opendevin:main 

fengyunzaidushi commented 6 months ago

Here is my ollama list:

(base) ubuntu@ubuntu:~$ ollama list
NAME            ID              SIZE    MODIFIED      
codellama:7b    8fdf8f752f6e    3.8 GB  3 minutes ago   
llama3:latest   a6990ed6be41    4.7 GB  6 hours ago 

Here is my docker ps:

(base) ubuntu@ubuntu:/mnt/sda/02downloads/squashfs-root$ docker ps
CONTAINER ID   IMAGE                              COMMAND                   CREATED          STATUS          PORTS                                           NAMES
4c6da0fc6fa5   ghcr.io/opendevin/sandbox:main     "/usr/sbin/sshd -D -…"   17 minutes ago   Up 17 minutes   0.0.0.0:58893->58893/tcp, :::58893->58893/tcp   opendevin-sandbox-64434f6e-90d0-4d1b-a212-a3b0b7aff1fb
111e23d153c5   ghcr.io/opendevin/opendevin:main   "uvicorn opendevin.s…"   17 minutes ago   Up 17 minutes   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp       serene_dijkstra

I have tried both codellama:7b and llama3, and it is still the same problem:

==============
STEP 0

10:53:00 - PLAN
123
10:53:00 - opendevin:ERROR: agent_controller.py:103 - Error in loop
Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 198, in _new_conn
    sock = connection.create_connection(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/app/.venv/lib/python3.12/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 793, in urlopen
    response = self._make_request(
               ^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 496, in _make_request
    conn.request(
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 400, in request
    self.endheaders()
  File "/usr/local/lib/python3.12/http/client.py", line 1331, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.12/http/client.py", line 1091, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.12/http/client.py", line 1035, in send
    self.connect()
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 238, in connect
    self.sock = self._new_conn()
                ^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 213, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f82592dfc20>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
           ^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 847, in urlopen
    retries = retries.increment(
              ^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f82592dfc20>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 1926, in completion
    generator = ollama.get_ollama_response(
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/llms/ollama.py", line 194, in get_ollama_response
    response = requests.post(
               ^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/api.py", line 115, in post
    return request("post", url, data=data, json=json, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='host.docker.internal', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f82592dfc20>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/app/opendevin/controller/agent_controller.py", line 99, in _run
    finished = await self.step(i)
               ^^^^^^^^^^^^^^^^^^
  File "/app/opendevin/controller/agent_controller.py", line 212, in step
    action = self.agent.step(self.state)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/agenthub/monologue_agent/agent.py", line 236, in step
    resp = self.llm.completion(messages=messages)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
           ^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
           ^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/app/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^
  File "/app/opendevin/llm/llm.py", line 86, in wrapper
    resp = completion_unwrapped(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 3077, in wrapper
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 2975, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/main.py", line 2148, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8823, in exception_type
    raise e
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 8632, in exception_type
    raise ServiceUnavailableError(
          ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/.venv/lib/python3.12/site-packages/litellm/exceptions.py", line 157, in __init__
    super().__init__(
  File "/app/.venv/lib/python3.12/site-packages/openai/_exceptions.py", line 82, in __init__
    super().__init__(message, response.request, body=body)
                              ^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'request'

thank you very much!

My system is Ubuntu 22.04 LTS.
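
One more thing that may be worth ruling out (an assumption on my part, not something confirmed in this thread): by default Ollama listens only on 127.0.0.1, so a request coming from a container via the Docker gateway address can be refused even when the --add-host mapping is correct. A sketch of restarting Ollama bound to all interfaces:

# stop the running Ollama instance first, then:
export OLLAMA_HOST=0.0.0.0:11434   # OLLAMA_HOST controls Ollama's listen address
ollama serve

If Ollama runs as a systemd service instead, the same OLLAMA_HOST variable would have to be set in the service environment.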

sevspo commented 4 months ago

Same here. I sort of got it to work with the run command from @gtsop-d, except that I am using ollama/phi3 as the model, but then CodeActAgent does not seem to work and I get a pure chat interface. I am on Ubuntu 22.04 as well.

tobitege commented 4 months ago

@sevspo can you elaborate what "but then CodeActAgent does not seem to work" means or looks like?

sevspo commented 4 months ago

@tobitege The agent answers questions and gives me code snippets to execute, but it does not seem to execute commands, since I do not see any output in the integrated read-only terminal. Not super helpful, I know, but I am getting very inconsistent results.

tobitege commented 4 months ago

@tobitege The Agent answers questions and gives me code snippets to execute, but does not seem to execute commands, since i do not see any output in the integrated readonly terminal. Not super helpful, I know, but I am getting very inconsistent results.

Yes: bash commands go to the lower terminal, while Python code execution goes to the top-right IPythonExecute tab. So it could go either way, depending on what the LLM thinks is better.