Open · yogesh-bansal opened this issue 1 month ago
I'm seeing similar issues with langchain-core v0.2.29 and langchain-ollama v0.1.1. Looking at the library, I don't see the base_url parameter being honored anywhere, and I've confirmed that curl works against my deployment as well.
I also tried setting OLLAMA_API_URL environment variable with no luck.
This issue also looks related, if the problem is in how the base_url parameter is handled: https://github.com/langchain-ai/langchain/issues/25160
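For anyone who wants to rule out networking, a quick sanity check against the raw ollama client (the package langchain-ollama wraps) is useful — a minimal sketch, using the same URL that is being passed as base_url:

```python
import ollama

# Explicit client pointed at the same host that fails via base_url.
client = ollama.Client(host="http://ollama:11434")
resp = client.generate(model="tinyllama", prompt="Hi there")
print(resp["response"])
```

If this succeeds while the LangChain wrapper fails, the problem is in how the wrapper configures its client, not in the deployment.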
Also happening to me. I was able to do a sanity check against the langchain_community code:
```python
from langchain_community.llms.ollama import Ollama

model = Ollama(model="tinyllama", base_url="http://ollama:11434")
model.invoke("Hi there")
```
Works perfectly fine (as does curl http://ollama:11434), while OllamaLLM refuses the connection with the same parameters.
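For reference, the langchain_ollama counterpart that fails, with the same model and URL:

```python
from langchain_ollama import OllamaLLM

model = OllamaLLM(model="tinyllama", base_url="http://ollama:11434")
model.invoke("Hi there")  # httpx.ConnectError: [Errno 111] Connection refused
```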
Thank you very much! I wasted many hours on this. I confirm that it works.
I am using ChatOllama and having the same issue; I provide http://ollama:11434 as the base_url. What would you recommend I do? In addition, this solution doesn't cover the bind_tools functionality.
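One workaround worth trying until base_url is handled correctly — a sketch, assuming the underlying ollama package falls back to the OLLAMA_HOST environment variable when no explicit host is configured:

```python
import os

# Must be set before langchain_ollama / ollama are imported, since the
# package's default client is created at import time.
os.environ["OLLAMA_HOST"] = "http://ollama:11434"

from langchain_ollama import ChatOllama

llm = ChatOllama(model="tinyllama")
print(llm.invoke("Hi there").content)
```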
Example Code
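(The original snippet was not preserved in this copy of the issue; the following is a minimal sketch consistent with the description below — the embedding model name is a placeholder.)

```python
from langchain_ollama import OllamaEmbeddings, OllamaLLM

BASE_URL = "http://ollama:11434"  # the Ollama server reachable from inside the container

# The embedding model works fine with the same base_url...
embeddings = OllamaEmbeddings(model="nomic-embed-text", base_url=BASE_URL)
print(embeddings.embed_query("Hi there")[:3])

# ...while the LLM fails with httpx.ConnectError: [Errno 111] Connection refused.
llm = OllamaLLM(model="tinyllama", base_url=BASE_URL)
print(llm.invoke("Hi there"))
```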
Error Message and Stack Trace (if applicable)
```
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 346, in invoke
    self.generate_prompt(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 703, in generate_prompt
    return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 882, in generate
    output = self._generate_helper(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 740, in _generate_helper
    raise e
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 727, in _generate_helper
    self._generate(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 268, in _generate
    final_chunk = self._stream_with_aggregation(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 236, in _stream_with_aggregation
    for stream_resp in self._create_generate_stream(prompt, stop, **kwargs):
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 186, in _create_generate_stream
    yield from ollama.generate(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/ollama/_client.py", line 79, in _stream
    with self._client.stream(method, url, **kwargs) as r:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 870, in stream
    response = self.send(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused
```
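Note the frame at langchain_ollama/llms.py line 186: the request goes through the module-level ollama.generate(...). Assuming the ollama package behaves as its published source does, that module-level API is bound to a default client created at import time, so a base_url set on the LangChain wrapper never reaches it — a sketch of the distinction:

```python
import ollama

# Module-level API: backed by a default Client() created at import time,
# i.e. localhost unless OLLAMA_HOST was exported beforehand.
ollama.generate(model="tinyllama", prompt="Hi there")

# Explicit client: what a working base_url would need to configure.
client = ollama.Client(host="http://ollama:11434")
client.generate(model="tinyllama", prompt="Hi there")
```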
Description
I have the following code inside a Python script in a Docker container. While the embedding model works fine, the LLM model returns Connection refused.
Both work fine from outside the container, and also from inside the container when called through, say, curl.
I have checked that the model names etc. are correct, since everything works outside the Python LangChain environment. The issue appears only when OllamaLLM is run inside the container environment.
I have attached the Dockerfile, cleaned up to reproduce the issue. I attach to the container with docker run -it image bash, run the Python code, and the error appears.
System Info
```
pip freeze | grep langchain
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.11
langchain-core==0.2.28
langchain-experimental==0.0.64
langchain-ollama==0.1.1
langchain-text-splitters==0.2.2
```