I think this is possible; we just need instructions on how to do it.
I'm not super sharp with Docker, but... the Ollama port it is trying to contact is 11434. This is how you could expose that port, I believe: `docker run -d -p 11434:11434 <langgraph_studio_image>`
The -p flag has no equivalent inside the Dockerfile, so it can't be moved there. You could do the same port forwarding with Docker Compose instead, using the `ports` section.
I have no idea how to add configuration options to the langgraph.json (for decoupling), but I'd be happy to help if I could :)
It just dawned on me what the real problem is. In my code, I am not specifying the hostname. Localhost in the container refers to the virtual machine represented by the container. Docker provides a special hostname, host.docker.internal, that can be used to refer to the host machine from within the container.
```python
from langchain_ollama import ChatOllama

# base_url points at the Docker host, not the container's own localhost
llm = ChatOllama(
    model="mistral-nemo:12b-instruct-2407-q8_0",
    base_url="http://host.docker.internal:11434",
    temperature=0.8,
    num_ctx=8192,
    num_predict=4096,
)
```
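If anyone wants to sanity-check the connection first, a plain GET against the base URL is enough (a minimal sketch using only the standard library; Ollama's root endpoint replies "Ollama is running" when it is reachable):

```python
import urllib.request

# A 200 response here means the container can reach the Ollama server on the host.
with urllib.request.urlopen("http://host.docker.internal:11434", timeout=5) as resp:
    print(resp.status, resp.read().decode())  # expect: 200 Ollama is running
```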
That works! This was a non-issue, I guess :) Maybe it could just use a note in the documentation somewhere, or at least hopefully someone can search here and find the answer.
Please feel free to close this issue; for my part, I have a valid solution.
I ran into the same problem / had the same use case. Thanks so much for figuring this out, @robbiemu! 🥳
Using `base_url="http://host.docker.internal:11434"` for local Ollama models might be worth adding to an FAQ / README section.
Thoughts, @hwchase17? 🙇
`base_url="http://host.docker.internal:11434"` did not work for me. The only difference is that I am using Orbstack. Could that be the issue?
Update: Latest version of Orbstack and LangGraph Studio did the trick
Came here for the same problem. Yes, because it runs in Docker you need to adjust the base_url:
```python
from langchain_ollama import ChatOllama

model = ChatOllama(
    model=...,  # your local Ollama model name
    base_url="http://host.docker.internal:11434",
)
```
Updated in README.md!
I am using the studio in module 2 of the Intro to LangGraph course from LangChain Academy. (The same issue occurs in any graph that relies on a model, if that model is set to ChatOllama.)
I've modified requirements.txt to remove langchain-openai and add langchain-ollama.
I'm aware of no changes needed in langgraph.json.
In the graph, I've swapped in my local model instead of OpenAI:
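The graph file itself isn't reproduced here, but the swap looks roughly like this (a rough sketch; the commented-out ChatOpenAI line and the model name are illustrative, the latter taken from earlier in this thread):

```python
from langchain_ollama import ChatOllama
# previously: from langchain_openai import ChatOpenAI
# (requirements.txt: langchain-openai removed, langchain-ollama added)

# llm = ChatOpenAI(model="gpt-4o")
llm = ChatOllama(
    model="mistral-nemo:12b-instruct-2407-q8_0",
    # No base_url is set, so the default http://localhost:11434 is used,
    # which resolves inside the Studio container rather than to the host machine.
    temperature=0.8,
)
```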
This studio is a reproduction of the coursework in the Jupyter notebook, and it runs without issue there. In the studio, however, when I submit a message, I get:
"Connect Error: [Errno 111] Connection refused"
Expected:
If opening this port for the chain in Docker is configurable in the JSON file, it should be required even for OpenAI models; otherwise you are tightly coupling the studio to OpenAI. The outbound ports should either always be explicit, or every chat* model in langchain should have its default port open.
Note: see also the original issue I opened on the LangChain Academy GitHub repo.