-
Ollama supports a "health" endpoint at the root of the running server:
```
❯ curl localhost:11434 -vvv
* Trying 127.0.0.1:11434...
* Connected to localhost (127.0.0.1) port 11434 (#0)
> GET /…
```
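For scripting a readiness check against that same root endpoint, something like the following works (a minimal sketch assuming the default `localhost:11434` address; `ollama_is_healthy` is just an illustrative helper name, not part of any Ollama client library):

```python
import urllib.request
from urllib.error import URLError

def ollama_is_healthy(base_url: str = "http://localhost:11434") -> bool:
    """Return True if the Ollama server root answers with HTTP 200."""
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except URLError:
        # Connection refused, DNS failure, or an HTTP error status.
        return False

if __name__ == "__main__":
    print("healthy" if ollama_is_healthy() else "unreachable")
```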
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to fin…
-
Currently, the Ollama connector implements its [own client](https://github.com/microsoft/semantic-kernel/tree/feature-connectors-ollama/dotnet/src/Connectors/Connectors.Ollama/Client).
Consider rep…
-
### Module
Ollama
### Testcontainers version
1.20.1
### Using the latest Testcontainers version?
Yes
### Host OS
Windows 11
### Host Arch
x86
### Docker version
Docker…
-
### Feature request
Hi,
I would like a feature that makes it possible to cancel a pullModelStream, so that Ollama stops downloading the model if we want it to.
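As a rough illustration of what cancellation could look like against the raw REST API (not the connector's pullModelStream itself), a client can abort the streaming HTTP request to `/api/pull`. This sketch uses Python's requests library; `should_cancel` is a hypothetical callback, and the request field name may vary by Ollama version:

```python
import json
import requests  # assumption: any HTTP client with streaming support would do

def pull_model(name: str, should_cancel, base_url: str = "http://localhost:11434") -> bool:
    """Stream /api/pull progress and stop early when should_cancel() returns True."""
    # Newer Ollama versions document the field as "model"; older ones used "name".
    with requests.post(f"{base_url}/api/pull", json={"model": name}, stream=True) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if should_cancel():
                # Dropping the connection stops the transfer on the client side.
                resp.close()
                return False
            if line:
                print(json.loads(line).get("status", ""))
    return True
```

Whether the server-side download also stops once the stream is dropped depends on the Ollama version, which is presumably part of what this request asks the client to expose cleanly.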
### Motivation
In my Ollama client, I want …
-
### What is the issue?
Model downloads have been very slow since I installed Ollama on Windows 11. There are no problems running models; it's only the download speeds.
The terminal seems to report a …
-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a sim…
-
- [x] This is actually a bug report.
- [x] I have tried searching the documentation and have not found an answer.
**What Model are you using?**
Llama 3.2
**Describe the bug**
When I use th…
-
### Before submitting your bug report
- [ ] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [ ] I'm not able to find an [open issue](ht…
-
### What is the issue?
When calling llava models from a REST client, setting temperature causes the Ollama server to hang until the process is killed.
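For reference, a request of roughly this shape is what reportedly triggers the hang (a minimal sketch using Python's requests library against the documented `/api/generate` endpoint; the image path and prompt are placeholders):

```python
import base64
import requests  # assumption: requests is available; any HTTP client works

# Placeholder image; llava expects base64-encoded images in the "images" field.
with open("example.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llava",
        "prompt": "Describe this image.",
        "images": [image_b64],
        "stream": False,
        "options": {"temperature": 0.2},  # setting temperature is what reportedly triggers the hang
    },
    timeout=120,
)
print(resp.json().get("response", ""))
```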
### OS
Windows
### GPU
Nvidia
### CPU
AMD
### Ol…