-
### Describe the bug
I have Ollama installed in Windows 11 24H2, default port 11434.
```
ollama list
NAME                 ID      SIZE    MODIFIED
opencoder-extra:8b   …
```
-
{
  "hub-mirror": [
    "ollama/ollama:0.4.0-rc5"
  ]
}
-
Judging from the videos and docs, this is a great plugin which I will surely use in some way (I haven't actually used it yet). I'm a heavy InDesign user, and the possibilities for amending content seem in…
-
The architecture error occurs when creating models or converting to GGUF with llama.cpp.
It looks like the model has the same architecture as llama3.2V; could you help me with this?
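One quick diagnostic before converting: llama.cpp's convert script chooses its tensor mapping from the `architectures` field in the model's config.json, and an architecture it does not support typically raises exactly this kind of error. A minimal sketch for inspecting that field (the sample file written below is illustrative; Llama 3.2 Vision checkpoints declare `MllamaForConditionalGeneration`):

```python
import json

def declared_architecture(config_path: str) -> str:
    """Return the architecture name a Hugging Face checkpoint declares."""
    with open(config_path) as f:
        # "architectures" is a list such as ["LlamaForCausalLM"]
        return json.load(f)["architectures"][0]

# Illustrative stand-in for a real model directory's config.json
with open("config.json", "w") as f:
    json.dump({"architectures": ["MllamaForConditionalGeneration"]}, f)

arch = declared_architecture("config.json")
print(arch)
```

If the printed name is not one the converter supports, conversion will fail no matter how similar the weights look to a plain Llama model.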
-
### Describe the bug
### My setup details
The Ollama + OpenWebUI container and the Bolt container are both running on the same Ubuntu VM host.
### Error message
When I browse to bolt from any mac…
-
How do I call a remote Ollama instance?
How do I call the Ollama service through a URL?
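Ollama serves an HTTP API on its port (11434 by default), so a remote instance is called by pointing requests at that machine's address instead of localhost. A minimal standard-library sketch, where the host IP and model name are placeholders:

```python
import json
import urllib.request

def build_generate_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # single JSON response instead of a token stream
    }).encode()
    return urllib.request.Request(
        f"{base_url.rstrip('/')}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )

# Point at the remote machine instead of localhost (11434 is Ollama's default port).
req = build_generate_request("http://192.168.1.50:11434", "llama3", "Why is the sky blue?")
# To actually send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```

Note that the server must be reachable from outside its own machine: Ollama binds to 127.0.0.1 by default, and setting `OLLAMA_HOST=0.0.0.0` before starting it is the usual way to expose it (subject to your own network and firewall setup).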
-
Currently the Export AI feature works only if the user has an OpenAI key. It would be nice to have support for Ollama as a fallback when the OpenAI key is not present.
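The fallback could be as simple as a backend check at startup. A sketch of the selection logic, assuming the key lives in `OPENAI_API_KEY` (the backend names here are illustrative, not the plugin's actual configuration):

```python
import os

def pick_backend() -> str:
    """Use OpenAI when a key is configured, otherwise fall back to Ollama."""
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    # No key configured: assume a local Ollama server (default http://localhost:11434)
    return "ollama"
```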
-
Most of the time, responses are empty when using Ollama continuously (fp16).
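Until the root cause is fixed, one workaround is to retry while the response comes back empty. A sketch in which `generate` is a placeholder for whatever client call you use to produce output:

```python
import time
from typing import Callable

def generate_with_retry(generate: Callable[[str], str], prompt: str,
                        attempts: int = 3, delay: float = 0.5) -> str:
    """Retry `generate` while it returns an empty or whitespace-only response."""
    text = ""
    for _ in range(attempts):
        text = generate(prompt)
        if text.strip():
            return text
        time.sleep(delay)  # brief pause before retrying
    return text  # may still be empty after all attempts
```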
-
Don't you get Triplex for free via OLLAMA?
https://ollama.com/sciphi/triplex
-
Here is a trace from my Intel Arc A770 via Docker:
```
$ ollama run deepseek-coder-v2
>>> write fizzbuzz
"""""""""""""""""""""""""""""""
```
And here is a trace from Arch Linux running on …