infiniflow / ragflow

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding.
https://ragflow.io
Apache License 2.0

[Question]: Ollama local model integration #2050

Open kranthicdac opened 2 months ago

kranthicdac commented 2 months ago

Describe your problem

Hi Team,

We're working on integrating an Ollama model deployed locally. Querying the local Ollama directly returns correct responses, but when we configure its base URL in the 'Add Model' tab on the UI, it does not work. Has anyone tested this feature before? We followed the instructions in the link below to integrate the local models.

https://ragflow.io/docs/dev/deploy_local_llm
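
For reference, a minimal sanity check that the local Ollama server itself is answering (assuming the default port 11434; adjust host and port to your setup):

```bash
# List the models the local Ollama instance is serving.
# A JSON response here means Ollama itself is up; the remaining question
# is only whether RAGFlow can reach this address from where it runs.
curl http://localhost:11434/api/tags
```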

saineshwar commented 2 months ago

You need to host Ollama on a server, and then you can use its URL.

kranthicdac commented 2 months ago

We did the same, but it's still not working. Is it working for you? Can you share detailed steps?

saineshwar commented 2 months ago

After hosting it, you can access it in the browser. Use that URL in the configuration.

[screenshot]
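
If the server is reachable from outside, hitting the base URL directly should return Ollama's plain-text liveness message, for example (the server address is a placeholder):

```bash
curl http://<your-server-ip>:11434/
# Expected output: Ollama is running
```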

kranthicdac commented 2 months ago

[screenshot]

I'm getting 'connection refused', even though my Ollama server is running.

saineshwar commented 2 months ago

Localhost will not work. Host it on a server, and then it will work.
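
A likely reason localhost fails is that RAGFlow usually runs inside Docker, so "localhost" resolves to the RAGFlow container rather than the machine running Ollama. A rough sketch of how to test this, assuming the default ragflow-server container name and that curl is available inside it:

```bash
# From inside the RAGFlow container, check whether the host's Ollama is
# reachable via host.docker.internal (Docker Desktop, or mapped on Linux)
# or via the host's LAN IP.
docker exec ragflow-server curl http://host.docker.internal:11434/api/tags

# The base URL entered in the 'Add Model' dialog would then be, for example:
#   http://host.docker.internal:11434
#   http://192.168.1.50:11434   (LAN IP of the machine running Ollama)
```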

ThierryHenry1994 commented 2 months ago

You need to host Ollama on a server, and then you can use its URL.

I met the same problem, and I solved it as follows:
1. Check your Ollama config (/etc/systemd/system/ollama.service).
2. Add Environment="OLLAMA_HOST=0.0.0.0" under the [Service] section.
3. Reload the config: systemctl daemon-reload, then systemctl restart ollama.
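
Concretely, the change would look roughly like this (a sketch; the unit path and section follow the standard Linux Ollama install):

```bash
# In /etc/systemd/system/ollama.service, under the [Service] section, add:
#   Environment="OLLAMA_HOST=0.0.0.0"
# so Ollama listens on all interfaces instead of only 127.0.0.1, then:
sudo systemctl daemon-reload
sudo systemctl restart ollama

# Confirm it is now listening on 0.0.0.0:11434:
ss -tlnp | grep 11434
```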

sergiomaciel commented 2 months ago

You need to host Ollama on a server, and then you can use its URL.

I met the same problem, and I solved it as follows:
1. Check your Ollama config (/etc/systemd/system/ollama.service).
2. Add Environment="OLLAMA_HOST=0.0.0.0" under the [Service] section.
3. Reload the config: systemctl daemon-reload, then systemctl restart ollama.
4. Download the model to the server: in the Ollama server's terminal, run 'ollama pull llama3.1'.
5. Configure RAGFlow with the model type, model name, and the Ollama server URL.
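
Putting the last two steps together, a rough sequence might be (llama3.1 is just the model named above, and the server address is a placeholder):

```bash
# 4. Pull the model on the machine that runs Ollama.
ollama pull llama3.1

# Verify the model is listed by the server RAGFlow will call.
curl http://<ollama-server-ip>:11434/api/tags

# 5. In RAGFlow's 'Add Model' dialog, set (for a chat model):
#    Model type: chat
#    Model name: llama3.1
#    Base URL:   http://<ollama-server-ip>:11434
```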