infiniflow / ragflow

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
https://ragflow.io
Apache License 2.0

[Question]: Model Ollama cannot connect #333

Open ginisksam opened 3 months ago

ginisksam commented 3 months ago

Describe your problem

The LLM options are limited, though. I have an Ollama (Mistral) instance running at 127.0.0.1:11434 but cannot add Ollama as a model in RAGFlow. Please assist. This software is very good and flexible for document split/chunk/semantic embedding. Many thanks.

KevinHuSh commented 3 months ago

Do you mean on the demo website or a local deployment? If on the demo website, 127.0.0.1 is not an accessible IP; make sure the server deploying Ollama has an internet-accessible IP address. If you deploy RAGFlow locally, make sure Ollama and RAGFlow are on the same LAN and can communicate with each other. A correct Ollama IP and port is the key.
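
A quick connectivity check (a sketch; 192.168.0.100 stands in for the actual IP of the machine running Ollama):

    # run from the machine (or container) where RAGFlow runs;
    # it should return a JSON list of the models Ollama has pulled
    curl http://192.168.0.100:11434/api/tags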

ginisksam commented 3 months ago

Locally deployed. The error is as follows:

hint : 102 Fail to access model(mistral).ERROR: [Errno 111] Connection refused

As you know, Ollama is really popular now for local machines.

OK, I got your message. An IP on the same LAN is key. Will try restarting Ollama as root with OLLAMA_HOST=0.0.0.0:11434 ollama serve.
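
Spelled out, that would look like this (a sketch; note that models pulled by one user are usually not visible when ollama serve runs as a different user):

    # expose Ollama on all interfaces instead of only the loopback
    OLLAMA_HOST=0.0.0.0:11434 ollama serve
    # in another terminal, this should print "Ollama is running"
    curl http://127.0.0.1:11434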

Thanks

shaoxinghua0623 commented 3 months ago

Hello, I got the same problem.

hint : 102 Fail to access model(qwen:14b).ERROR: [Errno 111] Connection refused

I modified Environment="OLLAMA_HOST=0.0.0.0", and when I try 0.0.0.0:11434 in the browser, it shows that Ollama is running.

I still couldn't add the model on the web. Could you help me? Thanks.

mjiulee commented 3 months ago

Thanks, devs~~

I gave it a try. The Ollama connection did not succeed at first, but after turning on the "supports Vision" option, the model could be added.

However, the Ollama model is not listed among the chat options, so the successful add is probably an illusion.

In the chat configuration there is no way to select the Ollama model I just added. [screenshot]

Hope the devs can polish this up.

shaoxinghua0623 commented 3 months ago

Yes, I have the same problem as you. @mjiulee

mjiulee commented 3 months ago

@shaoxinghua0623

I tried again and, surprisingly, it works now.

Just set the URL to the IP address of the server where Ollama is installed, for example: http://192.168.0.100:11434

Then everything is OK~

shaoxinghua0623 commented 3 months ago

@mjiulee

It really works. Thanks, brother!

matheospower commented 3 months ago

@shaoxinghua0623 I have the same issue on Ubuntu 22.04. Did the above resolve your issue? If yes, can you please help me find the appropriate IP for the Ollama URL?

shaoxinghua0623 commented 3 months ago

@matheospower You can use the command ifconfig in the terminal to find the IP of your Ubuntu machine. The Ollama base URL is then http://<IP of your Ubuntu>:11434, where <IP of your Ubuntu> is not 0.0.0.0 or 127.0.0.1.
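
A minimal sketch of that lookup (interface names and addresses vary; 192.168.1.50 is just an example):

    # on the Ubuntu machine running Ollama
    ifconfig            # or: ip addr show
    # pick the LAN address (e.g. 192.168.1.50) and use it in RAGFlow as:
    #   http://192.168.1.50:11434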

matheospower commented 3 months ago

@matheospower You can use the command ifconfig in the terminal to find the IP of your Ubuntu machine. The Ollama base URL is then http://<IP of your Ubuntu>:11434, where <IP of your Ubuntu> is not 0.0.0.0 or 127.0.0.1.

Thank you for the answer! Unfortunately, this did not resolve my problem. Not sure If I need to open a new issue but I will post it here.

My problem is that I get stuck in the pop-up for adding an Ollama model. I tested the Ollama service (it is running) with curl from both outside and inside the ragflow-server container, and it seems fine and reachable. After setting the URL in the pop-up and clicking OK, it loads for some time and then gives me a connection time-out. Also, I cannot see anything in docker logs -f ragflow-server or the ragflow-logs directory.

If anyone has had a similar issue or can give a hint on how to troubleshoot, please let me know!

fredrousseau commented 3 months ago

Hi, same issue here... I tested using http://host.docker.internal:11434/ as the base URL (that's probably the way to go, especially in a Docker deployment), but I got an error: "Hint 102 : Fail to access model(/mistral).ERROR: [Errno -2] Name or service not known".

fredrousseau commented 3 months ago

Found a way to solve that issue: I had to change my Ollama settings to Environment="OLLAMA_HOST=PRIVATEIP" to get it exposed. It looks like if Ollama listens only on the loopback (127.0.0.1), RAGFlow is not able to reach it.
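
For a systemd-managed Ollama install, a minimal sketch of that change (192.168.0.100 is a placeholder for your PRIVATEIP):

    # open an override for the service and add the environment line
    sudo systemctl edit ollama.service
    #   [Service]
    #   Environment="OLLAMA_HOST=192.168.0.100"
    # then apply and restart
    sudo systemctl daemon-reload
    sudo systemctl restart ollama.service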

ShawnHoo7256 commented 3 months ago

Solved this problem:

  1. Make sure Ollama itself is OK.
  2. Configure as follows: [screenshot]

ginisksam commented 3 months ago

Found a way to solve that issue: I had to change my Ollama settings to Environment="OLLAMA_HOST=PRIVATEIP" to get it exposed. It looks like if Ollama listens only on the loopback (127.0.0.1), RAGFlow is not able to reach it.

Cool. Will give it a try.

FINDINGS: Just discovered that my existing Ollama setup, which works well with LangChain, is not running at root level.

If I edit my ollama.service file, set Environment="OLLAMA_HOST=PRIVATEIP", and run systemctl start ollama.service, then in the browser PRIVATEIP:11434 => "Ollama is running". Fine.

But in the terminal, ollama list shows all the models are missing!!! Case in point: can Ollama reside at both root and user level and serve from either one at any given time without them affecting each other? OS: Linux Mint 21.3 (newbie)

OmegAshEnr01n commented 3 months ago

Found a way to solve that issue: I had to change my Ollama settings to Environment="OLLAMA_HOST=PRIVATEIP" to get it exposed. It looks like if Ollama listens only on the loopback (127.0.0.1), RAGFlow is not able to reach it.

This didn't work for me either.

hiwujie commented 3 months ago

If you are in Docker and cannot connect to a service running on your host machine bound to a local interface or loopback:

  • localhost
  • 127.0.0.1
  • 0.0.0.0

then in Docker you need to replace that localhost part with host.docker.internal. For example, if Ollama is running on the host machine, bound to http://127.0.0.1:11434/, you should put http://host.docker.internal:11434 into the connection URL in the Ollama setup.

Important: On Linux, http://host.docker.internal:xxxx does not work.
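
A quick check from inside the container (the container name ragflow-server matches the default docker-compose setup):

    docker exec -it ragflow-server curl http://host.docker.internal:11434
    # expected output: "Ollama is running"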

ganchun1130 commented 2 months ago

I can solve this problem! It's very simple! In the base URL field, enter, for example: http://192.168.0.100:11434/v1. Note: you must add v1. My guess is that RAGFlow mimics OpenAI's calling format, and when calling Ollama's OpenAI-compatible service, v1 is also part of the path! With that, the model can be added!
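
To confirm that the OpenAI-compatible path is reachable before adding the model (a sketch; the IP is an example):

    # Ollama serves an OpenAI-style API under /v1
    curl http://192.168.0.100:11434/v1/models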

BooleanMind commented 2 months ago

If you are in Docker and cannot connect to a service running on your host machine bound to a local interface or loopback:

  • localhost
  • 127.0.0.1
  • 0.0.0.0

then in Docker you need to replace that localhost part with host.docker.internal. For example, if Ollama is running on the host machine, bound to http://127.0.0.1:11434/, you should put http://host.docker.internal:11434 into the connection URL in the Ollama setup.

Important: On Linux, http://host.docker.internal:xxxx does not work.

This suggestion is so crucial that it should be added to /docs/ollama.md, imho.

OmegAshEnr01n commented 2 months ago

How can I make it work on Linux lol.

gaspardpetit commented 2 months ago

If you are in Docker and cannot connect to a service running on your host machine bound to a local interface or loopback:

  • localhost
  • 127.0.0.1
  • 0.0.0.0

then in Docker you need to replace that localhost part with host.docker.internal. For example, if Ollama is running on the host machine, bound to http://127.0.0.1:11434/, you should put http://host.docker.internal:11434 into the connection URL in the Ollama setup.

Important: On Linux, http://host.docker.internal:xxxx does not work.

host.docker.internal can work on Linux if you modify the docker-compose.yml by adding extra_hosts like this:

    extra_hosts:
      - "host.docker.internal:host-gateway"

Once host-gateway is mapped to host.docker.internal, you should be able to reach an Ollama instance running on the same host as RAGFlow (but not within the docker-compose) by referring to it as http://host.docker.internal:11434/
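
In context, that goes under the RAGFlow service entry in docker/docker-compose.yml (a sketch; only the relevant keys are shown, and the service name may differ in your compose file):

    services:
      ragflow:
        # ...existing image/ports/volumes settings...
        extra_hosts:
          - "host.docker.internal:host-gateway"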

tslyellow commented 3 weeks ago

Do you mean on the demo website or a local deployment? If on the demo website, 127.0.0.1 is not an accessible IP. Make sure the server deploying Ollama has an internet-accessible IP address. If you deploy RAGFlow locally, make sure Ollama and RAGFlow are on the same LAN and can communicate with each other. A correct Ollama IP and port is the key.

Can't you use http://localhost:11434 to connect to Ollama on the demo? You can only use http://localhost:11434 to connect to Ollama after a local deployment, is that right? If I want to add ollama3 to the demo, what is the best way?

tslyellow commented 3 weeks ago

http://host.docker.internal:11434

Hi, I run RAGFlow via Docker and Ollama locally on Windows. I set the URL to http://host.docker.internal:11434 and still get an error. Do you know what's going on? If you can, could you help me out?

gaspardpetit commented 3 weeks ago

@tslyellow On Windows, when running a Linux container in WSL, if you want to reach a port on the Windows host, you need to add --add-host=host.docker.internal:host-gateway to your docker command line and target host.docker.internal (as you are doing above). If you are launching the container from docker compose, see my post above about using extra_hosts.

If it still does not work, it may be that Ollama is bound to 127.0.0.1 by default, so the port may not be reachable outside of your loopback device. To instruct Ollama to listen on all network interfaces (including the Docker virtual network), you need to set the OLLAMA_HOST environment variable to 0.0.0.0. Note that this will also expose Ollama to incoming traffic from outside your PC, so you may want to ensure that you have proper firewall settings in place. Alternatively, you may choose to bind Ollama to your WSL IP, which can be found by running ipconfig.
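
On the Windows side, one way to set that variable persistently is shown below (a sketch; Ollama must be fully quit and restarted afterwards so it picks up the new value):

    # run in PowerShell on the Windows host, then restart Ollama
    setx OLLAMA_HOST "0.0.0.0"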