-
How can I use Ollama running on a remote server? Is there a way to set a base URL for Ollama?
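Ollama's CLI and most clients honor the `OLLAMA_HOST` environment variable for a non-default endpoint. A minimal sketch of resolving a remote base URL this way (the address below is a hypothetical example):

```python
import os

# Hypothetical address of the remote machine running `ollama serve`.
os.environ["OLLAMA_HOST"] = "http://192.168.1.50:11434"

def ollama_base_url() -> str:
    """Resolve the Ollama base URL, falling back to the local default."""
    return os.environ.get("OLLAMA_HOST", "http://127.0.0.1:11434").rstrip("/")

def generate_url() -> str:
    """Endpoint for POST /api/generate requests."""
    return f"{ollama_base_url()}/api/generate"

print(generate_url())  # http://192.168.1.50:11434/api/generate
```

Whether a given application reads `OLLAMA_HOST` or needs its own base-URL setting depends on that application's client code.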
-
### Describe the Bug
I am running AI tagging with Ollama and have set INFERENCE_JOB_TIMEOUT_SEC=1200. This is not respected: if the query runs longer than 5 minutes, it just fails wi…
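A hard 5-minute cutoff despite INFERENCE_JOB_TIMEOUT_SEC=1200 is consistent with the configured value never reaching the HTTP client, which then falls back to its own default (commonly 300 s). A minimal sketch of the plumbing to check, with hypothetical function names:

```python
import os

# Value from the bug report; in a real deployment this would come from the
# container environment rather than being set in code.
os.environ["INFERENCE_JOB_TIMEOUT_SEC"] = "1200"

def effective_timeout(default: float = 300.0) -> float:
    """Read timeout the HTTP client should be given.

    If the configured value is never passed down to the client, the
    client's own default (here 300 s, matching the observed 5-minute
    cutoff) wins regardless of the app-level setting.
    """
    raw = os.environ.get("INFERENCE_JOB_TIMEOUT_SEC")
    return float(raw) if raw else default

print(effective_timeout())  # 1200.0
```

Note that a reverse proxy or load balancer in front of Ollama can impose its own idle timeout and would need raising separately.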
-
**_When launching a search in the web UI, Ollama API responses work correctly for the generation steps but not for the embeddings._**
1. `ollama serve`
2. `python -m uvicorn main:app --relo…
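To isolate whether the embeddings endpoint itself works outside the web UI, it can help to hit it directly. A sketch that builds such a request (the model name is an assumption; send the payload with any HTTP client or `curl`):

```python
import json

OLLAMA_URL = "http://127.0.0.1:11434"  # default local `ollama serve` address

def embeddings_request(model: str, prompt: str) -> tuple[str, bytes]:
    """Build a request for Ollama's embeddings endpoint.

    POST /api/embeddings takes a JSON body with "model" and "prompt".
    "nomic-embed-text" below is an assumption; use whichever embedding
    model you have actually pulled.
    """
    url = f"{OLLAMA_URL}/api/embeddings"
    body = json.dumps({"model": model, "prompt": prompt}).encode()
    return url, body

url, body = embeddings_request("nomic-embed-text", "hello world")
print(url)
```

If this request succeeds while the web UI fails, the problem is in how the app calls the endpoint rather than in Ollama itself.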
-
### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Describe the bug and reproduction steps
1. Use the Docker command (with WSL) to set up 0.14 with the p…
-
I love your project and want to use it with a local Ollama + LLaVA setup. I have tried many approaches, including asking ChatGPT.
I am on Windows 11. I tried Docker with no luck, and changed the API address from the settings in the front…
-
When I download the GGUF file of Qwen 2.5 from Hugging Face and deploy it as the LLM for LightRAG through an Ollama Modelfile, it always gets stuck at the last step, no matter how large or small my txt …
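One plausible cause of a model that never finishes is a Modelfile without the model's chat template and stop token, so generation never terminates. A hedged sketch of a Modelfile for a Qwen 2.5 GGUF (the filename and parameter values are assumptions; Qwen 2.5 uses the ChatML template):

```
# Hypothetical filename; point this at the GGUF you actually downloaded.
FROM ./qwen2.5-7b-instruct-q4_k_m.gguf

# ChatML template used by Qwen 2.5; a missing or wrong TEMPLATE is a
# common cause of generation that never stops.
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER stop "<|im_end|>"
PARAMETER num_ctx 4096
```

Build it with `ollama create qwen2.5-local -f Modelfile` and retest before pointing LightRAG at the model.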
-
```
Traceback (most recent call last):
  File "/home/abi/nfs/work_space/ollama/lightrag_ollama_demo.py", line 14, in <module>
    rag = LightRAG(
TypeError: LightRAG.__init__() got an unexpected keyword argume…
```
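The truncated error says `LightRAG.__init__()` received a keyword it does not accept, which usually means the demo script targets a different `lightrag` version than the one installed. One way to list which keywords your installed version actually accepts (shown with a stand-in class, since the real one comes from the `lightrag` package):

```python
import inspect

class LightRAG:  # stand-in with hypothetical parameters;
    def __init__(self, working_dir: str, llm_model_func=None):  # import the real class instead
        self.working_dir = working_dir

# Parameter names accepted by the constructor, minus `self`.
accepted = set(inspect.signature(LightRAG.__init__).parameters) - {"self"}
print(sorted(accepted))  # ['llm_model_func', 'working_dir']
```

Comparing this list against the keyword named in the full traceback shows whether the script or the installed package needs updating.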
-
### Validations
- [X] I believe this is a way to improve. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue](https://githu…
-
Currently the only way to switch the Ollama model is to edit the centralized config file (there is no option to pass a config file via a flag). Ideally we would set an array of models in the config and have a keybinding in…
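The requested behavior, an array of models plus a keybinding that cycles through them, can be sketched independently of any particular config format; all names here are hypothetical:

```python
from itertools import cycle

MODELS = ["llama3", "mistral", "qwen2.5"]  # hypothetical config array

class ModelSwitcher:
    """Cycles through a configured list of model names."""

    def __init__(self, models: list[str]):
        self._cycle = cycle(models)
        self.current = next(self._cycle)

    def next_model(self) -> str:
        """What a 'switch model' keybinding would invoke."""
        self.current = next(self._cycle)
        return self.current

switcher = ModelSwitcher(MODELS)
print(switcher.next_model())  # mistral
print(switcher.next_model())  # qwen2.5
```

Wrapping back to the first model after the last keeps the keybinding usable with any list length.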
-
### Issue
Getting an error when trying to use Ollama locally:
```
Traceback (most recent call last):
  File "/venv/lib/python3.10/site-packages/aider/coders/base_coder.py", line 1184, in send_me…
```