-
Are you planning to include Ollama support, or would you like me to try my hand at doing it for you?
Thank you for the implementation, btw. This is great! I've been doing this manually forever. A dedi…
-
It would be nice to have Ollama support.
-
How am I supposed to use Ollama with this?
-
Hey, very promising project! Could this be run against a local Ollama instance in the future?
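For anyone experimenting in the meantime, here is a minimal sketch of talking to a local Ollama instance directly, assuming the default `http://localhost:11434` server and the `ollama` Python client; it says nothing about how the project itself would integrate it:

```python
# Minimal sketch: chat with a locally running Ollama server.
# Assumes `ollama serve` is running on the default port 11434 and
# that the model (here "llama3.2", a placeholder) has been pulled.
import ollama

response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response["message"]["content"])
```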
-
### Issue Description
Thanks for adding in Ollama support and an example. How would I set the URL to a remote Ollama instance?
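The exact setting depends on how the project wires it up, but on the Ollama side a remote instance is just a different base URL. A minimal sketch with the `ollama` Python client, using a placeholder address:

```python
# Sketch: point the Ollama Python client at a remote instance.
# "http://192.168.1.50:11434" is a placeholder address.
from ollama import Client

client = Client(host="http://192.168.1.50:11434")
print(client.list())  # quick check that the remote server is reachable
```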
-
Please add the ability to use Ollama endpoints as well as LM Studio endpoints.
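Both Ollama and LM Studio expose OpenAI-compatible HTTP endpoints, so one possible way to cover both is a single configurable base URL. A sketch with the `openai` Python client, using the usual local default addresses and a placeholder model name:

```python
# Sketch: Ollama and LM Studio both serve OpenAI-compatible endpoints,
# so a single configurable base URL can cover both backends.
# The URLs are the usual local defaults; the model name is a placeholder.
from openai import OpenAI

OLLAMA_BASE_URL = "http://localhost:11434/v1"    # Ollama default
LM_STUDIO_BASE_URL = "http://localhost:1234/v1"  # LM Studio default

client = OpenAI(base_url=OLLAMA_BASE_URL, api_key="not-needed")  # key is ignored by local servers
reply = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)
```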
-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Branch name
main
### Commit ID
ragflow 0.13.0
### Other environment information
```Markdown
Import…
```
-
Besides returning the list response, can it also report GPU/CPU percentages? Figuring out how much of the model is loaded into the GPU is not as clear-cut as dividing `size_vram` by the VRAM size.
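For what it's worth, a rough approximation is to compare `size_vram` against the model's own total `size` from `/api/ps`, rather than against the GPU's VRAM. A sketch, assuming a local Ollama server on the default port; this is only an estimate of the split:

```python
# Rough sketch: estimate the GPU/CPU split of loaded models from
# Ollama's /api/ps response by comparing size_vram to the model's
# total size. This only approximates what `ollama ps` reports.
import requests

resp = requests.get("http://localhost:11434/api/ps", timeout=10)
resp.raise_for_status()

for m in resp.json().get("models", []):
    total = m.get("size", 0)
    in_vram = m.get("size_vram", 0)
    gpu_pct = 100 * in_vram / total if total else 0
    print(f"{m.get('name', '?')}: ~{gpu_pct:.0f}% GPU / ~{100 - gpu_pct:.0f}% CPU")
```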
-
I want to be able to use Gemma models when I'm offline. Please add support for the Ollama API.
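As a point of reference, once a Gemma model has been pulled, a fully offline call through Ollama's REST API could look like the sketch below; the `gemma3` tag and localhost address are placeholders for whatever you run locally:

```python
# Sketch: offline chat with a locally pulled Gemma model through
# Ollama's REST API. "gemma3" is a placeholder tag; pull whichever
# Gemma variant you use (e.g. `ollama pull gemma3`) before going offline.
import requests

payload = {
    "model": "gemma3",
    "messages": [{"role": "user", "content": "Summarize this note in one line."}],
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```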
-
Hello!
I'm having some trouble with the embedding model in QA mode (a quick sanity check of the model itself is sketched after the checklist below):
- [x] Disable all other plugins besides Copilot **(required)**
- [x] Screenshot of note + Copilot chat pane + dev console added *…
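Independent of the plugin, one way to check whether the embedding model itself responds is to hit the embeddings endpoint directly, assuming the model is served by a local Ollama instance; the model name below is a placeholder for whatever is configured in QA mode:

```python
# Sketch: sanity-check the embedding model outside the plugin, assuming
# it is served by a local Ollama instance. "nomic-embed-text" is a
# placeholder; substitute the embedding model configured in QA mode.
import requests

resp = requests.post(
    "http://localhost:11434/api/embeddings",
    json={"model": "nomic-embed-text", "prompt": "hello world"},
    timeout=60,
)
resp.raise_for_status()
vec = resp.json()["embedding"]
print(f"Got an embedding of dimension {len(vec)}")
```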