-
# `ollama` raises `cudaSetDevice err: 35` on startup
### Describe the bug
`ollama` raises `cudaSetDevice err: 35` on startup and falls back to CPU. This is logged as an incompatibility between t…
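For context, error code 35 in the CUDA runtime API is `cudaErrorInsufficientDriver`, meaning the installed NVIDIA driver is older than the CUDA runtime the binary was built against; updating the driver typically clears it. A minimal lookup sketch (a few well-known codes only, values assumed to match the CUDA version in use):

```python
# A handful of CUDA runtime error codes (from driver_types.h; assumed to match
# the CUDA runtime version Ollama ships with).
CUDA_ERRORS = {
    2: "cudaErrorMemoryAllocation",
    35: "cudaErrorInsufficientDriver",  # driver too old for the CUDA runtime
    100: "cudaErrorNoDevice",
    999: "cudaErrorUnknown",
}

print(CUDA_ERRORS[35])  # the code reported in this issue
```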
-
### What is the issue?
(CodeLlama) developer@ai:~/PROJECTS/OllamaModelFiles$ ~/ollama/ollama run gemma-2-27b-it-Q8_0:latest
>>> Hello.
Error: POST predict: Post "http://127.0.0.1:42623/completion":…
-
The documentation doesn't really show how to connect ollama to mindcraft.
-
Ollama has some support for OpenAI API emulation, which could be a nice way to support more tests without requiring an API key or a public endpoint. This might be a nice way to extend coverage of the openai,…
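As a sketch of what that could look like: Ollama exposes an OpenAI-compatible endpoint under `/v1` (default port 11434), so tests could build standard chat-completion request bodies and point them at a local server instead of api.openai.com. The helper name below is hypothetical:

```python
import json

# Ollama's OpenAI-compatible chat endpoint (default port; adjust if needed).
OLLAMA_OPENAI_URL = "http://127.0.0.1:11434/v1/chat/completions"

def chat_payload(model, messages):
    """Build an OpenAI-style chat completion request body (hypothetical helper)."""
    return json.dumps({"model": model, "messages": messages})

body = chat_payload("llama3", [{"role": "user", "content": "Hello"}])
```

The same body works against the real OpenAI endpoint, which is what makes this attractive for widening test coverage without a key.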
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
I am going to use Ollama with this, as I don't own an OpenAI key or an Anthropic key. I still added them to the env vars in the Docker Compose file, but it still asks for login via GitHub, and even if I do try to login…
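For reference, a sketch of how such keys are usually passed through a Compose file; the variable names here are the conventional ones and may differ from what this project actually reads:

```yaml
services:
  app:
    environment:
      # Conventional names; check the project's docs for the exact keys it reads.
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      # If the app supports Ollama, it typically also needs the server URL.
      # host.docker.internal works on Docker Desktop; Linux may need extra_hosts.
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
```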
-
I already have many models downloaded for use with my locally installed Ollama.
As my Ollama server is always running, is there a way to get GPT4All to use the models being served up via Ollama, or can I p…
-
Hey guys,
I've missed the latest updates over the past 1-2 months. I remember from the v2.5 update that `dspy.OllamaLocal` would be removed, with everything else replaced by `dspy.LM` which uses `litell…
-
# Description
Currently the Ollama configuration is set up to always use the llama3 model. The problem with this is that new models are coming out all the time; for instance, llama 3.2 is currently a…
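One low-friction way to address this (a sketch; `OLLAMA_MODEL` is a hypothetical variable name) is to read the model name from the environment and fall back to the current hardcoded default, so existing setups keep working:

```python
import os

def pick_model(env=os.environ):
    """Return the Ollama model to use, defaulting to the current hardcoded llama3."""
    return env.get("OLLAMA_MODEL", "llama3")

# With no override set, behavior is unchanged and llama3 is used.
model = pick_model()
```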