A project-structure-aware autonomous software engineer aiming for autonomous program improvement. Resolves 30.67% of tasks (pass@1) on SWE-bench lite and 38.40% of tasks (pass@1) on SWE-bench verified, with each task costing less than $0.7.
When testing the llama3 model with ollama, I encountered an error indicating that the ollama server is unreachable. This issue arises because ollama.chat(model=self.name, messages=[]) invokes chat = _client.chat (defined in site-packages/ollama/__init__.py), where _client = Client(). The Client() constructor defaults to 'http://localhost:11434', which, inside a Docker container, refers to the container itself rather than the host machine, while I have ollama installed on the host.
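For context, here is a minimal sketch of the failure mode, assuming the snippet runs inside the ACR container while ollama serves only on the host; the model name and message are placeholders:

```python
import ollama

try:
    # The module-level ollama.chat delegates to a default Client(),
    # which targets http://localhost:11434 -- inside the container,
    # that is the container itself, not the host running ollama.
    ollama.chat(model="llama3", messages=[{"role": "user", "content": "hi"}])
except Exception as exc:
    # The connection fails because nothing listens on port 11434
    # inside the container (assumption: ollama runs only on the host).
    print(f"ollama server unreachable: {exc}")
```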
To resolve this, I propose two options:
1. Update the README: Suggest that ollama be installed within the same Docker container as the agent. This approach requires users to configure a GPU environment inside the container if they wish to use GPU acceleration for running llama3, which may be cumbersome.
2. Host Installation with Custom Client Configuration: Recommend installing ollama on the host machine and using client.chat, where client = Client(host='http://host.docker.internal:11434'). Here, host.docker.internal resolves to the host machine from within the Docker network (a minimal sketch follows below).
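A minimal sketch of option 2, assuming ollama listens on its default port 11434 on the host and that the container can resolve host.docker.internal (on Linux this typically requires starting the container with --add-host=host.docker.internal:host-gateway); the model name and message are placeholders:

```python
from ollama import Client

# Point the client at the host machine instead of the default localhost:11434.
client = Client(host="http://host.docker.internal:11434")

response = client.chat(
    model="llama3",  # placeholder model name
    messages=[{"role": "user", "content": "Hello from inside the container"}],
)
print(response["message"]["content"])
```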
I hope the maintainers will acknowledge this issue. Since llama3 is a cost-effective option, its popularity is likely to increase, and many more users could run into this connectivity problem.
Thank you for reporting this. I was only testing with ollama and ACR both running on the host machine. I will patch it soon, likely using the second approach you mentioned.