-
# Description
Currently the Ollama configuration is set up to always use the llama3 model. The problem with this is that new models are coming out all the time; for instance, llama 3.2 is currently a…
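A minimal sketch of the kind of change being asked for, with the model name read from an environment variable instead of hardcoded (`OLLAMA_MODEL` and `OLLAMA_URL` are illustrative names, not settings the project currently defines); it only assumes Ollama's standard `/api/generate` endpoint:

```python
import json
import os
import urllib.request

# Illustrative: read the model name from the environment instead of
# hardcoding "llama3", falling back to the old default if unset.
OLLAMA_MODEL = os.environ.get("OLLAMA_MODEL", "llama3")
OLLAMA_URL = os.environ.get("OLLAMA_URL", "http://localhost:11434")

def generate(prompt: str) -> str:
    """Call Ollama's /api/generate with the configured model."""
    payload = json.dumps({
        "model": OLLAMA_MODEL,
        "prompt": prompt,
        "stream": False,
    }).encode("utf-8")
    req = urllib.request.Request(
        f"{OLLAMA_URL}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(generate("Say hello in one sentence."))
```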
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
llama.cpp supports multiple adapters, see https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md
Why does Ollama support only one adapter?
https://github.com/ollama/ollama/blob/659…
-
So as the title says, I have Ollama installed on my machine, but when I tried to run `bun scripts/pre_build.js` the script redownloads Ollama.
Shouldn't it scan for any installed package? or t…
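For illustration only (the actual script is JavaScript run with bun, and `pre_build.js` is not reproduced here): a sketch of the kind of check being suggested, looking for an existing `ollama` binary on PATH before falling back to a download.

```python
import shutil
import subprocess

def find_local_ollama() -> str | None:
    """Return the path of an already-installed ollama binary, or None."""
    path = shutil.which("ollama")
    if path is None:
        return None
    # Sanity check: make sure the binary actually runs.
    try:
        subprocess.run([path, "--version"], check=True, capture_output=True)
    except (OSError, subprocess.CalledProcessError):
        return None
    return path

if __name__ == "__main__":
    existing = find_local_ollama()
    if existing:
        print(f"Reusing Ollama found at {existing}, skipping download")
    else:
        print("Ollama not found, downloading...")
```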
-
**Is your feature request related to a problem? Please describe.**
Currently it seems only OpenAI LLMs are supported with the tools option. It would be nice to have the same for Ollama. Models like llama3.1 already supp…
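A hedged sketch of what Ollama-side tool support could look like, using Ollama's `/api/chat` endpoint, which accepts an OpenAI-style `tools` list for models such as llama3.1 (the weather function below is purely illustrative):

```python
import json
import urllib.request

# Illustrative tool definition; the get_weather function is hypothetical.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = json.dumps({
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": tools,
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/chat",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    message = json.loads(resp.read())["message"]

# If the model decided to call a tool, the call shows up here.
print(message.get("tool_calls"))
```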
-
I want to ask a question that is not very advanced. Please forgive my ignorance.
Can Co-Storm be instantiated using Ollama+Serper?
Because this is not a bug, I did not apply the template. Please f…
-
### ⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Which Operating System are you using?
Linux
### Which versio…
-
Command should be this apparently (`OLLAMA_ORIGINS` sets the CORS origins the Ollama server will accept, so the Obsidian app can reach it):
`$env:OLLAMA_ORIGINS="app://obsidian.md*"; ollama serve`
-
On Windows, I am getting the following error.
```
Error: [Errno 11001] getaddrinfo failed
```
Full error:
```
(venv) PS D:\Users\mike\Documents\03_work\0_python\1_sandbox\Ollama_crap…
-
Hi,
I wanted to give this a try and installed Ollama locally. I am able to use the Ollama API at http://localhost:11434/api/generate with curl.
I set `export OLLAMA_API_BASE=http://localhost:…
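As a quick sanity check that the value given to `OLLAMA_API_BASE` really points at a reachable Ollama server, something like the following can be run (a sketch; it only assumes Ollama's standard `/api/tags` endpoint, which lists locally installed models):

```python
import json
import os
import urllib.request

# Read the same base URL the tool is given; default to Ollama's standard port.
base = os.environ.get("OLLAMA_API_BASE", "http://localhost:11434").rstrip("/")

# /api/tags lists the locally available models, so it doubles as a liveness check.
with urllib.request.urlopen(f"{base}/api/tags") as resp:
    models = json.loads(resp.read()).get("models", [])

print(f"Ollama reachable at {base}, {len(models)} model(s) installed:")
for m in models:
    print(" -", m.get("name"))
```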