Open ginisksam opened 3 months ago
Do you mean on the demo website or a local deployment? On the demo website, 127.0.0.1 is not an accessible IP. Make sure the server deploying Ollama has an internet-accessible IP address. If you deploy RAGFlow locally, make sure Ollama and RAGFlow are on the same LAN and can communicate with each other. A correct Ollama IP and port is the key.
Locally deployed. The error is as follows:
hint : 102 Fail to access model(mistral).ERROR: [Errno 111] Connection refused
As you know Ollama is really popular now for local machine.
OK, I got your message. The IP being on the same LAN is key. Will try restarting Ollama as root with OLLAMA_HOST=0.0.0.0:11434 ollama serve
Thanks
Hello, I got the same problem.
hint : 102 Fail to access model(qwen:14b).ERROR: [Errno 111] Connection refused
I modified Environment="OLLAMA_HOST=0.0.0.0", and when I open 0.0.0.0:11434 in the browser, it shows that Ollama is running.
But I still couldn't add the model on the web. Could you help me? Thanks.
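For reference, the setting mentioned above usually lives in a systemd drop-in for the Ollama unit (the unit name ollama.service is an assumption; it may differ on your install). Edit it with systemctl edit ollama.service, roughly like:

```ini
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```

Then run systemctl daemon-reload and systemctl restart ollama so the new bind address takes effect.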
Thanks, that helped a lot!
I tried it: the Ollama connection still failed, but after enabling the "Does it support Vision?" toggle, the model could be added successfully.
However, the Ollama model is not listed in the chat options, so the "success" is probably an illusion.
In the chat configuration, the Ollama model I just added cannot be selected.
Hope this can be improved.
Yes, I got the same problem as you. @mjiulee
@mjiulee
Really, thanks brother!
@shaoxinghua0623 I have the same issue on Ubuntu 22.04. Did the above resolve your issue? If yes, can you please help me find the appropriate IP for the Ollama URL?
@matheospower
You can use the command ifconfig in the terminal to find the IP of your Ubuntu machine. The Ollama base URL is then http://<IP of your Ubuntu>:11434.
Note that the IP of your Ubuntu machine is not 0.0.0.0 or 127.0.0.1.
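If you prefer not to read ifconfig output by hand, the same "which IP am I on the LAN?" question can be answered with a small Python sketch (my own illustration, not part of RAGFlow or Ollama):

```python
import socket

def lan_ip() -> str:
    """Best-effort guess of this machine's LAN IPv4 address.

    Connecting a UDP socket "towards" a public address sends no packets,
    but it makes the OS pick the outgoing interface, whose address is
    then readable via getsockname().
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # no traffic is actually sent
        return s.getsockname()[0]
    except OSError:
        # No route at all (e.g. fully offline): fall back to loopback.
        return "127.0.0.1"
    finally:
        s.close()

print(lan_ip())
```

Whatever this prints (as long as it is not 127.0.0.1) is the host part to use in http://<ip>:11434.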
Thank you for the answer! Unfortunately, this did not resolve my problem. Not sure if I need to open a new issue, but I will post it here.
My problem is that I get stuck in the pop-up when adding an Ollama model. I tested the Ollama service (it is running) with curl from both outside and inside the ragflow-server container, and it seems fine and reachable. After setting the URL in the pop-up and clicking OK, it loads for some time and then gives me a connection time-out. I also cannot see anything in docker logs -f ragflow-server or in the ragflow logs directory.
If anyone had a similar issue or can give a hint on how to troubleshoot please let me know!
Hi, same issue here... I tested using http://host.docker.internal:11434/ as the base URL (that's probably the way to go, especially in a Docker deployment model), but I got the error "Hint 102 : Fail to access model(/mistral).ERROR: [Errno -2] Name or service not known".
Found a way to solve that issue: I had to change my Ollama settings to Environment="OLLAMA_HOST=PRIVATEIP" to get it exposed. It looks like if you listen on the loopback (127.0.0.1), RAGFlow is not able to reach it.
This solved the problem for me.
Kool. Will give it a try.
FINDINGS: Just discovered that my existing Ollama setup, which works well with LangChain, is not running at root level.
If I edit my ollama.service file, set Environment="OLLAMA_HOST=PRIVATEIP", and run systemctl start ollama.service, then in the browser PRIVATEIP:11434 => "Ollama is running". Fine.
But in the terminal, ollama list shows all the models are missing! Case in point: can Ollama reside under both root and a user, and serve at root or user level separately at any one time, without affecting each other? OS: Linux Mint 21.3 (newbie)
This didn't work for me either.
If you are in Docker and cannot connect to a service running on your host machine on a local interface or loopback:
- localhost
- 127.0.0.1
- 0.0.0.0
Then in Docker you need to replace that localhost part with host.docker.internal. For example, if Ollama is running on the host machine, bound to http://127.0.0.1:11434/, you should put http://host.docker.internal:11434 into the connection URL in the Ollama setup.
Important: on Linux, http://host.docker.internal:xxxx does not work.
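The host rewrite described above can be sketched as a tiny helper function (my own illustration; the function name is made up and not part of RAGFlow):

```python
from urllib.parse import urlsplit, urlunsplit

# Loopback-style hosts that are unreachable from inside a container.
LOOPBACK_HOSTS = {"localhost", "127.0.0.1", "0.0.0.0"}

def dockerize_base_url(url: str) -> str:
    """Replace a loopback host with host.docker.internal, keeping the port."""
    parts = urlsplit(url)
    if parts.hostname in LOOPBACK_HOSTS:
        netloc = "host.docker.internal"
        if parts.port is not None:
            netloc += f":{parts.port}"
        parts = parts._replace(netloc=netloc)
    return urlunsplit(parts)

print(dockerize_base_url("http://127.0.0.1:11434/"))
# http://host.docker.internal:11434/
```

Non-loopback addresses (e.g. a LAN IP) pass through unchanged.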
I can solve this problem! It's simple! In the base URL field, enter, for example: http://192.168.0.100:11434/v1. Note: you must append /v1. My guess is that RAGFlow imitates the OpenAI calling format, and Ollama's official OpenAI-compatible service is also served under /v1. With this, the model can be added!
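That tip can be expressed as another small helper (again just an illustration of mine; whether RAGFlow actually requires the /v1 suffix is this commenter's observation, which I cannot confirm):

```python
def ensure_v1(base_url: str) -> str:
    """Append the OpenAI-style /v1 path if it is not already present."""
    trimmed = base_url.rstrip("/")
    return trimmed if trimmed.endswith("/v1") else trimmed + "/v1"

print(ensure_v1("http://192.168.0.100:11434"))
# http://192.168.0.100:11434/v1
```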
The host.docker.internal suggestion is so crucial that it needs to be inserted in /docs/ollama.md, imho.
How can I make it work on Linux lol.
host.docker.internal can work on Linux if you modify the docker-compose.yml by adding extra_hosts, like this:

    extra_hosts:
      - "host.docker.internal:host-gateway"

Once host-gateway is mapped to host.docker.internal, you should be able to refer to an Ollama instance running on the same host as RAGFlow (but not within the docker-compose) by referring to it as http://host.docker.internal:11434/
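In context, that fragment sits under the service entry of docker-compose.yml, roughly like this (the service name ragflow is an assumption; use whatever name your compose file already declares):

```yaml
services:
  ragflow:
    # ... keep your existing image, ports, volumes, etc. ...
    extra_hosts:
      - "host.docker.internal:host-gateway"
```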
Can't you use http://localhost:11434 to connect to Ollama on the demo? You can only use http://localhost:11434 to connect to Ollama after a local deployment, is that right? If I want to add llama3 to the demo, what is the best way?
Hi, I run RAGFlow via Docker and Ollama locally on Windows. I set the URL to http://host.docker.internal:11434 and still get an error. Do you know what's going on? If you can, could you help me out?
@tslyellow On Windows, when running a Linux container in WSL, if you want to reach a port on the Windows host, you need to add --add-host=host.docker.internal:host-gateway to your docker command line and target host.docker.internal (like you are doing above). If you are launching the container from docker compose, then see my post above about using extra_hosts.

If it still does not work, it may be that ollama is bound to 127.0.0.1 by default, so the port may not be available outside of your loopback device. To instruct ollama to listen on all network interfaces (including the Docker virtual network), you need to set the OLLAMA_HOST environment variable to 0.0.0.0. Note that this will also expose ollama to incoming traffic from outside your PC, so you may want to ensure that you have proper firewall settings in place. Alternatively, you may choose to bind ollama to your WSL IP, which can be found by running ipconfig.
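Putting those pieces together, the commands might look roughly like this (a sketch only: <ragflow-image> is a placeholder, not a real image name, and the exact flags depend on your deployment):

```
:: On the Windows side, persist the bind address so Ollama listens on all
:: interfaces (restart the Ollama app afterwards):
setx OLLAMA_HOST "0.0.0.0"

:: When starting the container, let it resolve the host, then point
:: RAGFlow's Ollama base URL at http://host.docker.internal:11434
docker run --add-host=host.docker.internal:host-gateway <ragflow-image>
```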
Describe your problem
But the LLM options are limited. I have an Ollama Mistral instance running at 127.0.0.1:11434 but cannot add Ollama as a model in RAGFlow. Please assist. This software is very good and flexible for document split-chunk-semantic processing for embedding. Many thanks.