-
I have started the Ollama service, and on my Windows machine I set the environment variable OLLAMA_HOST=0.0.0.0:11434. But as soon as I run python -m openui and open the web page on its port, the following error appears:
WARNING (openui): Couldn't connect to Ollama at https://api.groq.com/openai/v1
WARNING (openui): Cou…
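The warning suggests OpenUI is falling back to a Groq base URL instead of the local Ollama server. A first sanity check is whether the Ollama API itself is reachable; a minimal sketch, assuming Ollama's documented REST API on the port set above (OpenUI's own environment handling is not shown here):
```python
import requests

# Assumption: Ollama listening locally; with OLLAMA_HOST=0.0.0.0:11434 it is
# also reachable as http://<machine-ip>:11434 from other hosts.
OLLAMA_URL = "http://localhost:11434"

resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()
print([m["name"] for m in resp.json()["models"]])  # locally pulled models
```
If this fails, the problem is the Ollama service or the firewall rather than OpenUI's configuration.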
-
Setting up `gptel` with local Ollama models is not working
**Additional context**
Emacs version: 29
Operating system: macOS Sonoma
Ollama:
```bash
➜ ~ ollama list
NAME …
```
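Before digging into the gptel configuration itself, it can help to confirm that the Ollama chat endpoint actually serves completions for one of the listed models; a minimal sketch against the documented /api/chat endpoint, where the model name is a placeholder for one from `ollama list`:
```python
import requests

payload = {
    "model": "llama3",  # placeholder: substitute a model from `ollama list`
    "messages": [{"role": "user", "content": "Say hi"}],
    "stream": False,
}
resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```
If this works, the server side is fine and the issue is on the gptel side.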
-
```python
from ragas.llms.prompt import Prompt
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    context_recall,
    context_precision,
)
from…
```
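The excerpt cuts off before the evaluation call, but for context, a minimal sketch of how these imports are typically wired together, assuming a ragas `evaluate` that takes a Hugging Face `Dataset` plus LangChain-style `llm`/`embeddings` objects; the Ollama model name and sample rows are placeholders:
```python
from datasets import Dataset
from langchain_community.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from ragas import evaluate
from ragas.metrics import (
    answer_relevancy,
    faithfulness,
    context_recall,
    context_precision,
)

# Placeholder rows; real runs use the RAG pipeline's own outputs.
dataset = Dataset.from_dict({
    "question": ["What is Ollama?"],
    "answer": ["Ollama runs large language models locally."],
    "contexts": [["Ollama is a tool for running LLMs on your own machine."]],
    "ground_truth": ["Ollama is a local LLM runner."],
})

result = evaluate(
    dataset,
    metrics=[answer_relevancy, faithfulness, context_recall, context_precision],
    llm=ChatOllama(model="llama3"),               # assumption: a locally pulled model
    embeddings=OllamaEmbeddings(model="llama3"),  # assumption: same model for embeddings
)
print(result)
```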
-
Description:
I’ve set up Ollama on my local machine and am trying to access it through a GUI hosted on a different system using ngrok. Both Ollama and the GUI are running on separate ngrok tunnels:…
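For a GUI on another machine, the only change on the client side should be pointing requests at the tunnel's public URL instead of localhost. A minimal sketch, assuming an ngrok HTTP tunnel in front of port 11434; the URL is a placeholder, and the `ngrok-skip-browser-warning` header only matters on plans that serve ngrok's browser interstitial:
```python
import requests

# Placeholder public URL for the Ollama-side tunnel.
OLLAMA_TUNNEL = "https://example.ngrok-free.app"

resp = requests.post(
    f"{OLLAMA_TUNNEL}/api/generate",
    json={"model": "llama3", "prompt": "ping", "stream": False},
    headers={"ngrok-skip-browser-warning": "true"},  # bypass ngrok's interstitial page
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["response"])
```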
-
**Is your feature request related to a problem? Please describe.**
No
**Describe the solution you'd like**
I have a home lab where I run Ollama backend as a docker container. I can expose it to t…
-
### What is the issue?
If Ollama and a model are bound as a startup process, there is the potential for Ollama to use swap memory on start and cause an incredibly slow system/system han…
-
## Temperature and seed parameters should be part of 'options'
According to [the docs](https://github.com/ollama/ollama/blob/main/docs/api.md#generate-a-chat-completion) temperature and seed should b…
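For reference, a request shaped the way the linked docs describe, with `temperature` and `seed` nested under `options` rather than passed at the top level; the model name is a placeholder:
```python
import requests

payload = {
    "model": "llama3",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": False,
    # Per the API docs, sampling parameters belong inside "options":
    "options": {"temperature": 0.7, "seed": 42},
}
resp = requests.post("http://localhost:11434/api/chat", json=payload)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```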
-
### What is the issue?
After setting the iGPU allocation to 16GB (out of 32GB), some models crash when loaded, while others manage.
```
ollama run llama3.2
Error: llama runner process has terminated: c…
```
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](ht…
-
### What happened?
How can I configure Quivr to use a local Ollama model? I can't find a config example, and there is no env.example file.
### Relevant log output
```bash
How can i config the Quivr to use …
```
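Quivr's exact configuration keys aren't shown in this excerpt, but Ollama also exposes an OpenAI-compatible API under `/v1`, which is the base URL many GUIs expect. A minimal sketch of that endpoint using the `openai` Python client; the `api_key` is a dummy value that Ollama ignores, and the model name is a placeholder:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)
resp = client.chat.completions.create(
    model="llama3",  # placeholder: any locally pulled model
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```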