jonfairbanks / local-rag

Ingest files for retrieval augmented generation (RAG) with open-source Large Language Models (LLMs), all without 3rd parties or sensitive data leaving your network.

Struggling to make it work - Ollama URL not saved #55

Closed: HyperUpscale closed this issue 1 month ago

HyperUpscale commented 1 month ago

I'm running the Docker image.

I added extra_hosts: to my compose file, but the Ollama Endpoint still shows "http://localhost:11434". When I manually edit it to "http://host.docker.internal:11434", it finds the models and seems to start working, but it doesn't keep that setting.
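For reference, this is roughly the compose setup I'm aiming for. It's only a sketch: the service name, port, and volume path are my guesses, not the project's actual compose file; the extra_hosts mapping itself is standard Docker Compose syntax.

```yaml
# Sketch of a docker-compose service; service name, image/build,
# and port are assumptions, not taken from the repo.
services:
  local-rag:
    build: .
    ports:
      - "8501:8501"
    extra_hosts:
      # Maps host.docker.internal to the Docker host's gateway IP,
      # so the container can reach an Ollama server running on the host.
      - "host.docker.internal:host-gateway"
    volumes:
      # Persist app data across restarts; the container path is taken
      # from the stack trace below, the host path is a guess.
      - ./data:/home/appuser/data
```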

🗂️ GitHub Repo

Select a GitHub.com repo

Processing...

✔️ LLM Initialized

✔️ Embedding Model Created

FileNotFoundError: [Errno 2] No such file or directory: '/home/appuser/data'
Traceback:
  File "/home/appuser/utils/rag_pipeline.py", line 119, in rag_pipeline
    documents = llama_index.load_documents(save_dir)
  File "/home/appuser/utils/llama_index.py", line 99, in load_documents
    for file in os.scandir(data_dir):
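From the trace, the crash itself looks like /home/appuser/data simply doesn't exist yet when os.scandir() runs. A guard along these lines in load_documents would avoid it; this is a sketch based only on the trace, not the repo's actual code:

```python
import os

def load_documents(data_dir):
    # Create the directory if it's missing; the traceback shows
    # os.scandir() raising FileNotFoundError when the data dir is absent.
    os.makedirs(data_dir, exist_ok=True)
    documents = []
    for entry in os.scandir(data_dir):
        if entry.is_file():
            # Placeholder for the real document-loading logic.
            documents.append(entry.path)
    return documents
```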


I can't make it work.

jonfairbanks commented 1 month ago

Please read the troubleshooting guide. Also, based on the directory in the stack trace, you are using Windows. Please use a Linux host if possible.