-
## Summary
Support any OpenAI-compatible endpoint, such as tabbyAPI, vLLM, Ollama, etc.
I am running Qwen2.5-Coder 32B with [tabbyAPI](https://github.com/theroyallab/tabbyAPI), which is an OpenAI …
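For context, a minimal client-side sketch of what this would enable, using the openai Python SDK pointed at a self-hosted endpoint; the base URL, API key, and model name below are assumptions about a local deployment, not part of the original request:

```python
from openai import OpenAI

# Any OpenAI-compatible server works here: tabbyAPI, vLLM, or Ollama.
# The base_url and model name are assumptions; adjust them to your deployment
# (e.g. Ollama serves an OpenAI-compatible API at http://localhost:11434/v1).
client = OpenAI(
    base_url="http://localhost:5000/v1",  # tabbyAPI's default port
    api_key="your-api-key",               # tabbyAPI checks this; Ollama ignores it
)

response = client.chat.completions.create(
    model="Qwen2.5-Coder-32B",  # whatever model the server has loaded
    messages=[{"role": "user", "content": "Write a hello world in Python."}],
)
print(response.choices[0].message.content)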
-
### What is the issue?
I am working in a multi-GPU environment. I set up multiple Docker containers, assigning one GPU to each, so that I can process my workload in parallel.
Here is the command I use …
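(The actual command is cut off above; purely as a hypothetical illustration of this kind of setup, a launcher script pinning one GPU per container via Docker's `--gpus` flag might look like the following. The image name, port mapping, and GPU count are invented for the example.)

```python
import subprocess

# Hypothetical sketch: start one Ollama container per GPU, each bound to its
# own host port, using Docker's --gpus flag to expose a single device.
NUM_GPUS = 4

for gpu in range(NUM_GPUS):
    subprocess.run([
        "docker", "run", "-d",
        "--gpus", f"device={gpu}",       # expose only this GPU to the container
        "-p", f"{11434 + gpu}:11434",    # one host port per container
        "--name", f"ollama-gpu{gpu}",
        "ollama/ollama",
    ], check=True)
```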
-
# Description
Currently the Ollama configuration is set up to always use the llama3 model. The problem with this is that new models are coming out all the time; for instance, llama 3.2 is currently a…
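A minimal sketch of the requested behavior, assuming the model name is read from configuration rather than hard-coded; the environment-variable name, default, and helper function are hypothetical:

```python
import os

# Hypothetical sketch: read the model name from the environment instead of
# hard-coding "llama3", so newer models (e.g. llama3.2) work without a code change.
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "llama3")

def build_request(prompt: str) -> dict:
    """Build an Ollama /api/generate payload using the configured model."""
    return {"model": OLLAMA_MODEL, "prompt": prompt, "stream": False}
```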
-
Hi.
Thank you for this cool server. I am developing an open-source AI tool that is compatible with multiple services/models, and Ollama is one of them, except that I need to use it with multiple cl…
-
Could you document how to access the Ollama API with bearer authentication using Python?
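Until that is documented, here is a minimal sketch. It assumes Ollama is running behind a reverse proxy that checks the `Authorization` header (Ollama itself does not validate tokens); the URL, token, and model name are placeholders:

```python
import requests

# Assumption: a reverse proxy in front of Ollama requires
# "Authorization: Bearer <token>". URL, token, and model are placeholders.
OLLAMA_URL = "https://ollama.example.com/api/generate"
TOKEN = "your-bearer-token"

response = requests.post(
    OLLAMA_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"model": "llama3", "prompt": "Hello!", "stream": False},
    timeout=60,
)
response.raise_for_status()
print(response.json()["response"])
```

If the proxy exposes Ollama's OpenAI-compatible endpoint instead, the same token can be supplied through the openai SDK's `api_key` argument, which is sent as a bearer header.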
-
### What is the issue?
After downloading and installing, an additional download of a compiled rocBLAS is required.
The downloaded rocblas.dll overwrites the rocblas.dll that comes with the SDK, and puts rocblas.dll in the …
-
Hello!
I'm experiencing an issue connecting to the local Ollama server while using your page-assist extension. The application displays an "Unable to connect to Ollama 🦙" message and a red icon.
I'…
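(The rest of the report is cut off. As a first diagnostic step, a short snippet like the following, assuming the default local endpoint, can confirm whether the server is reachable at all:)

```python
import requests

# Quick connectivity check against Ollama's default local endpoint.
# If this fails, the problem is the server or the URL, not the extension;
# if it succeeds, a likely culprit for browser extensions is CORS
# (see the OLLAMA_ORIGINS environment variable).
try:
    r = requests.get("http://localhost:11434/api/version", timeout=5)
    r.raise_for_status()
    print("Ollama is reachable, version:", r.json().get("version"))
except requests.RequestException as exc:
    print("Could not reach Ollama:", exc)
```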
-
### Steps To Reproduce
Steps to reproduce the behavior:
```
$ nix build github:nixos/nixpkgs/4d2f662676c79b6e30ec9e8f7a41236d5f883687#ollama
```
### Build log
https://gist.github.com/n8h…
-
```
(graphrag-ollama-local) root@autodl-container-49d843b6cc-10e9e2a3:~/graphrag-local-ollama# python -m graphrag.query --root ./ragtest --method global "What is machinelearning?"
INFO: Reading setti…
```
-
Hey, I just tried the Alpaca Flatpak; it works perfectly fine with small models.
But whenever I try to download models bigger than 6 GB, the progress bar always stops.
Llama 3.1 models, as well as…