-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Branch name
main
### Commit ID
ragflow 0.13.0
### Other environment information
```Markdown
Import…
```
-
Besides returning the list response, could it also report the GPU/CPU percentages? Figuring out how much of the model is loaded onto the GPU is not as clear-cut as dividing `size_vram` by the VRAM size.
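A minimal sketch of one way to derive such percentages from an `/api/ps` model entry. The field names `size` and `size_vram` come from the Ollama API; the assumption that the CPU/GPU split is simply the fraction of loaded bytes resident in VRAM (as the `ollama ps` CLI column suggests) is mine, and the sample sizes are made up:

```python
def gpu_cpu_split(model: dict) -> tuple[float, float]:
    """Approximate a CPU/GPU split for a loaded model: the share of the
    model's loaded bytes that is resident in VRAM counts as 'GPU'."""
    size = model["size"]            # total bytes occupied by the loaded model
    size_vram = model["size_vram"]  # bytes of that total resident on the GPU
    gpu = size_vram / size if size else 0.0
    return 1.0 - gpu, gpu

# Illustrative /api/ps entry (sizes are invented for the example):
entry = {"name": "llama3:8b", "size": 6_000_000_000, "size_vram": 4_500_000_000}
cpu, gpu = gpu_cpu_split(entry)
print(f"{cpu:.0%} CPU / {gpu:.0%} GPU")  # → 25% CPU / 75% GPU
```

This only measures where the weights live, not actual compute utilization, which is part of why the split is not clear-cut.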
-
Hello!
I'm having some trouble with the embedding model in QA mode:
- [x] Disable all other plugins besides Copilot **(required)**
- [x] Screenshot of note + Copilot chat pane + dev console added *…
-
### Extension
https://www.raycast.com/massimiliano_pasquini/raycast-ollama
### Raycast Version
1.84.3
### macOS Version
15.0.1
### Description
I have been trying to connect my Ollama server (GC…
-
### Bug Description
- It does not connect to the API at all (not working even when hardcoded).
- Workaround: use the LMStudio Embeddings component and route to /v1 (works just fine with Ollama, even loads avai…
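A sketch of the /v1 workaround described above: Ollama also serves an OpenAI-compatible API under `/v1`, so an OpenAI-style embeddings request body works against it. The base URL and model name here are illustrative assumptions, not taken from the report:

```python
import json

# Assumed local endpoint; Ollama's OpenAI-compatible API lives under /v1.
OLLAMA_OPENAI_BASE = "http://localhost:11434/v1"

def embeddings_body(model: str, text: str) -> str:
    """Build an OpenAI-style /v1/embeddings request body that Ollama accepts."""
    return json.dumps({"model": model, "input": text})

body = embeddings_body("nomic-embed-text", "hello world")
```

Pointing an OpenAI-compatible embeddings component at `OLLAMA_OPENAI_BASE` with a body like this is the essence of the workaround.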
-
How do I integrate Ollama as the LLM?
-
How do I get an API Key for a local Ollama instance?
-
First, I love this so much. This is probably my favorite GitHub project I've seen in a while. I've been using web search on Open WebUI, but this is so much better/faster/more effective that it's crazy. G…
-
Hi, awesome project!
I'm about to run my first query, but I'm stuck.
This is the Ollama server API endpoint:
```bash
curl http://10.4.0.100:33821/api/version
{"version":"0.4.2"}
```
T…
-
ollama /set parameter num_ctx 4096
Can we use this when we do something like:
llm_con$parameters$num_ctx
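For reference, a sketch of the HTTP-side counterpart of the REPL command above, in Python rather than R: the `/api/generate` endpoint accepts an `options` object that overrides runtime parameters such as `num_ctx` per request. The model name is illustrative:

```python
import json

def generate_payload(model: str, prompt: str, num_ctx: int = 4096) -> str:
    """Build a body for POST /api/generate that sets num_ctx per request,
    the HTTP equivalent of `/set parameter num_ctx 4096` in the ollama REPL."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        # "options" overrides the model's default runtime parameters
        "options": {"num_ctx": num_ctx},
    })

body = generate_payload("llama3", "Hello")
```

Whether a given client binding (like the R accessor shown above) forwards its parameter into this `options` field is up to that binding.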