-
Are you planning to include Ollama support, or would you like me to try my hand at doing this for you?
Thank you for the implementation, btw. This is great! I've been doing this manually forever. A dedi…
-
### What problem does the new feature solve?
Add Ollama support for the vectorizer
### What does the feature do?
Allows using a self-hosted Ollama instance to generate embeddings.
### Implementation cha…
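
Not the author, but to make the request concrete, here is a minimal sketch of what the vectorizer call could look like against a self-hosted instance, using Ollama's documented `/api/embeddings` endpoint. The host and model name are placeholders for whatever your deployment runs.

```python
import requests

OLLAMA_HOST = "http://localhost:11434"  # placeholder: your self-hosted instance

def embed(text: str, model: str = "nomic-embed-text") -> list:
    """Fetch one embedding vector from a self-hosted Ollama instance."""
    resp = requests.post(
        f"{OLLAMA_HOST}/api/embeddings",
        json={"model": model, "prompt": text},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]

print(len(embed("hello world")))  # vector dimensionality, e.g. 768 for nomic-embed-text
```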
-
It would be nice to have Ollama support.
-
Hey, very promising project! Could this be run with a local Ollama instance in the future?
-
### Issue Description
Thanks for adding Ollama support and an example. How would I set the URL to point at a remote Ollama instance?
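
In case it helps while waiting for an answer: with the `ollama` Python client, the host can be overridden when constructing the client (the address below is a placeholder for your server), and the default client also honors the `OLLAMA_HOST` environment variable.

```python
from ollama import Client

# Point the client at a remote Ollama instance instead of localhost.
# The address is a placeholder for your own server.
client = Client(host="http://192.168.1.50:11434")

resp = client.embeddings(model="nomic-embed-text", prompt="hello world")
print(len(resp["embedding"]))
```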
-
How am I supposed to use Ollama with this?
-
Running the ollama demo on Windows 10, with ollama==0.4.2 and python==3.9.
![image](https://github.com/user-attachments/assets/30ec8349-b6bb-4d9b-80d2-0a771cd3bf61)
The following error is shown:
```
INFO:lightrag:[Entity Extraction]...
…
-
Please add the ability to use Ollama endpoints as well as LM Studio endpoints.
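
Since both Ollama and LM Studio expose OpenAI-compatible endpoints, here is a hedged sketch of how a single code path could serve both, assuming the `openai` Python client and the servers' default ports (11434 and 1234); the model names are placeholders.

```python
from openai import OpenAI

# Both servers speak the OpenAI chat API; only base_url and model differ.
# Ports are the defaults (Ollama: 11434, LM Studio: 1234); adjust as needed.
ollama = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
lm_studio = OpenAI(base_url="http://localhost:1234/v1", api_key="unused")

for client, model in ((ollama, "llama3.1"), (lm_studio, "local-model")):
    reply = client.chat.completions.create(
        model=model,  # placeholder model names
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(reply.choices[0].message.content)
```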
-
### Bug Description
I'm trying out the RAG flow, and instead of OpenAI for embedding and generation, I'm using Ollama. The Ollama embedding part keeps giving this error:
```
Error Building Component
…
-
Besides returning the list response, can it report the GPU/CPU percentages? Figuring out how much of the model is loaded onto the GPU is not as clear-cut as dividing `size_vram` by the VRAM size.
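
For what it's worth, a sketch of the calculation I believe `ollama ps` itself uses, based on the documented `/api/ps` response fields `size` and `size_vram`: the GPU share of a loaded model is `size_vram / size`, not `size_vram` over total VRAM. The host is a placeholder, and the field interpretation is my assumption.

```python
import requests

OLLAMA_HOST = "http://localhost:11434"  # placeholder host

# /api/ps lists the currently loaded models with their total size and the
# portion resident in VRAM; size_vram / size gives the GPU share (assumed
# to mirror how `ollama ps` derives its CPU/GPU column).
resp = requests.get(f"{OLLAMA_HOST}/api/ps", timeout=10)
resp.raise_for_status()

for m in resp.json().get("models", []):
    size, size_vram = m["size"], m["size_vram"]
    gpu_pct = 100 * size_vram / size if size else 0.0
    print(f"{m['name']}: {gpu_pct:.0f}% GPU / {100 - gpu_pct:.0f}% CPU")
```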