-
I am running a Windows instance of Ollama, and it works with other Ollama Obsidian plugins. With BMO, though, it won't show me any models to select in the dropdown list; it's empty. The interface works in o…
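For what it's worth, a quick way to rule out the Ollama side is to query the model-list endpoint such a dropdown would typically be populated from. A rough sketch; the URL and field names assume a default local install, not anything specific to BMO:

```python
# Hypothetical diagnostic: confirm Ollama is reachable and actually reports models.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama address; adjust if remapped

with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    data = json.load(resp)

# Each entry should contain a "name" field such as "llama3:latest".
for model in data.get("models", []):
    print(model["name"])
```

If this prints nothing, the plugin has nothing to show either, and the problem is on the Ollama side rather than in the dropdown.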
-
I wonder if the `intell` profile is supported on Linux. If not, any guidance on adding support is greatly appreciated, and I can provide a PR as part of this issue.
My presumption is based on:
```
e…
-
Since Ollama supports more than 30 models but doesn't have the one I need, and since I have already deployed another application using FastChat, can I encapsulate the interface and disguise it…
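Roughly what I have in mind is a thin shim that speaks Ollama's API on the outside and forwards requests to FastChat's OpenAI-compatible server. A minimal sketch; the `FASTCHAT_URL`, model name, and the exact subset of endpoints a given client needs are all assumptions, not a tested integration:

```python
# Sketch of a proxy exposing Ollama-style endpoints backed by a FastChat
# (OpenAI-compatible) server. Run with: uvicorn shim:app --port 11434
import httpx
from fastapi import FastAPI

app = FastAPI()
FASTCHAT_URL = "http://localhost:8000/v1"   # hypothetical FastChat OpenAI-compatible server
MODEL_NAME = "my-custom-model"              # hypothetical model served by FastChat

@app.get("/api/tags")
async def list_models():
    # Present the FastChat model as if it were a locally available Ollama model.
    return {"models": [{"name": MODEL_NAME, "model": MODEL_NAME}]}

@app.post("/api/chat")
async def chat(body: dict):
    # Forward an Ollama-style chat request to the OpenAI-compatible endpoint.
    async with httpx.AsyncClient() as client:
        r = await client.post(
            f"{FASTCHAT_URL}/chat/completions",
            json={
                "model": MODEL_NAME,
                "messages": body.get("messages", []),
                "stream": False,
            },
            timeout=120,
        )
    reply = r.json()["choices"][0]["message"]
    # Mimic (part of) Ollama's non-streaming chat response shape.
    return {"model": MODEL_NAME, "message": reply, "done": True}
```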
-
I ran `pip install -r requirements.txt` successfully. Now what? How do I run the app?
-
I am completely new to this; I just read about it on Reddit. Can someone walk me through how to connect it to the host, or what steps to follow for it to work properly? I don't know what to do.
-
I have an Ollama instance set up on an external server and think it would be a great, easy fit for this type of add-on if I could hook it in. It opens up the use of many more models with better hardware, if the…
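For context, the only thing such an integration seems to need is a configurable base URL rather than a hard-coded `localhost`. A rough sketch of talking to a remote instance; the host name and model below are placeholders:

```python
# On the server, Ollama has to listen on a reachable interface, e.g.:
#   OLLAMA_HOST=0.0.0.0 ollama serve
# The client then only needs the base URL.
import json
import urllib.request

OLLAMA_URL = "http://gpu-box.example.com:11434"  # placeholder external server

req = urllib.request.Request(
    f"{OLLAMA_URL}/api/generate",
    data=json.dumps({"model": "llama3", "prompt": "Hello", "stream": False}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```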
-
# Bug Report
## Description
**Bug Summary:**
When importing and loading a large number of chat sessions into Open WebUI, the application experiences significant performance degradation, causing…
-
https://github.com/ollama/ollama
This is a service that deploys large models locally and provides an API interface. When can it be integrated?
Here is the list of supported models: https://ollam…
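For reference, a minimal non-streaming chat request against that API might look like the sketch below; the model name is a placeholder and only the response fields I actually read are shown:

```python
# Minimal non-streaming chat call against Ollama's HTTP API.
import json
import urllib.request

payload = {
    "model": "llama3",  # placeholder; any locally pulled model name works
    "messages": [{"role": "user", "content": "Summarize this issue in one line."}],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)
print(reply["message"]["content"])
```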
-
It'd be cool if there were an optional integration in Omnivore that implemented one or more AI/LLM APIs:
- [`.gguf` file format](https://github.com/ggerganov/ggml/blob/master/docs/gguf.md)
- OSS i…
-
### What is the issue?
Ollama is failing to run on the GPU; instead it uses the CPU. If I force it using `HSA_OVERRIDE_GFX_VERSION=9.0.0`, then I get `Error: llama runner process has terminated: signal: abo…