echzhai opened this issue 2 weeks ago
@enyst apologies for the ping, but this should in theory be possible, right?
@echzhai Can you please confirm you're running with the docker command from the README? Can you open the UI, go to Settings, and set the correct URL as the "base url" in Advanced Settings?
The variable you point out, LLM_OLLAMA_BASE_URL, exists only to fetch model info; the URL actually used for generation is the base url. It's a bit odd... but it should work. 🤔
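For context, generation goes through litellm, and its Ollama provider takes the base URL as `api_base`. A minimal sketch of that path, assuming a placeholder model name and server URL (neither is the project's actual default):

```python
# Sketch of the generation path: litellm's Ollama provider uses api_base
# (the "base url" from Advanced Settings), not LLM_OLLAMA_BASE_URL.
# Model name and URL below are placeholders.
import litellm

response = litellm.completion(
    model="ollama/llama3",                           # "ollama/<model>" routes to the Ollama provider
    messages=[{"role": "user", "content": "hello"}],
    api_base="http://xxxx.remote-server.xxx:11434",  # remote Ollama server instead of localhost
)
print(response.choices[0].message.content)
```

If that call reaches the remote server, the remaining question is only whether the UI setting is being picked up.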
I am currently trying to install OpenDevin in k8s, so I followed this instruction: https://github.com/eng-waleed1/OpenDevin/commit/1ab7fe345e6e05d9a256668b20cc13cd9d4d92f1#diff-4a324bc9f354692336f5c151f18837bdce24876b14bd9c0e349e6af46cbef4b9R61 Maybe the docker image in that example is too old? The image is ghcr.io/opendevin/opendevin:0.7.1.
Ah, I see. Yes, we're at 0.13.0, and there are very significant changes. For many reasons, I'd suggest trying this version, but I don't know if that guide works with it...
Sorry, the newest version doesn't work with the k8s examples. Do you have plans to support k8s?
What problem or use case are you trying to solve? Hi, I have a private cloud where I deployed an Ollama model, but the documented setting LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434" implies the model needs to be running locally.
When I change it to LLM_OLLAMA_BASE_URL=http://xxxx.remote-server.xxx:11343, it still tries to connect to localhost:11434. Do you know if you can make it work with a remote Ollama server?
Here is the log:

```
  File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 7308, in exception_type
    raise ServiceUnavailableError(
litellm.exceptions.ServiceUnavailableError: litellm.ServiceUnavailableError: OllamaException: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/generate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f645490e9f0>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
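Before digging into the settings, it may help to confirm the remote Ollama server is reachable from inside the container at all. A rough sketch that hits the same /api/generate endpoint shown in the traceback (the URL and model name are placeholders, not values from this issue):

```python
# Connectivity check against the remote Ollama server, using the same
# /api/generate endpoint that appears in the traceback. URL and model name
# are placeholders; run this from inside the OpenDevin container to rule
# out network/DNS problems before looking at the base-url configuration.
import requests

OLLAMA_URL = "http://xxxx.remote-server.xxx:11434"

resp = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={"model": "llama3", "prompt": "hello", "stream": False},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If this succeeds from inside the container but OpenDevin still calls localhost:11434, the base url in Advanced Settings is the value to fix.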