-
### What is the issue?
I had Ollama compiled from source and it worked fine. Recently I rebuilt it to the latest version, and it no longer seems to use my GPU (it uses a lot of CPU processes, and it …
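One quick way to confirm whether a loaded model actually ended up on the GPU is to query the running-models endpoint and compare how much of the model is resident in VRAM. A minimal sketch, assuming a recent Ollama build that exposes `/api/ps` on the default port:
```python
import requests

# Ask the local Ollama server which models are loaded and where.
# Assumes a recent build exposing /api/ps on the default port.
resp = requests.get("http://localhost:11434/api/ps", timeout=10)
resp.raise_for_status()

for model in resp.json().get("models", []):
    size = model.get("size", 0)            # total bytes the model occupies
    size_vram = model.get("size_vram", 0)  # bytes resident in GPU memory
    pct = 100 * size_vram / size if size else 0
    print(f"{model['name']}: {pct:.0f}% offloaded to GPU")
```
A model that is fully on the GPU reports 100%; a CPU-only run reports 0%.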
-
Hi
IMHO, the `context_length` parameter for Ollama should be passed as `num_predict` in `llm_wrapper.py`:
```
def _ollama_generate(self, prompt, **kwargs):
    url = f"{self.base_url}/api/generate"
    data = …
```
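For reference, Ollama's `/api/generate` endpoint takes generation parameters inside an `options` object: `num_predict` caps the number of tokens generated, while `num_ctx` sets the context window. A minimal sketch of what the suggested request might look like (this is an illustration of the issue author's suggestion, not the project's actual `llm_wrapper.py`):
```python
import requests

def ollama_generate(base_url, model, prompt, max_tokens):
    # Ollama expects sampling/limit parameters under "options";
    # "num_predict" caps the number of tokens generated.
    data = {
        "model": model,
        "prompt": prompt,
        "stream": False,
        "options": {"num_predict": max_tokens},
    }
    resp = requests.post(f"{base_url}/api/generate", json=data, timeout=600)
    resp.raise_for_status()
    return resp.json()["response"]
```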
-
As outlined in #4983, the Ollama endpoint that Meilisearch has been using (`/api/embeddings`) is deprecated.
A new `/api/embed` endpoint was introduced in v0.3.4.
Switching to the new endpoint h…
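For context, the two endpoints differ in request and response shape: the deprecated one takes a single `prompt` and returns one `embedding`, while `/api/embed` accepts an `input` (a string or a list of strings) and returns a list of `embeddings`. A minimal sketch of the new call; the model name is only an example:
```python
import requests

# New-style embedding request against /api/embed (Ollama >= 0.3.4).
# "input" may be a single string or a list of strings.
resp = requests.post(
    "http://localhost:11434/api/embed",
    json={"model": "nomic-embed-text", "input": ["hello", "world"]},
    timeout=60,
)
resp.raise_for_status()
embeddings = resp.json()["embeddings"]  # one vector per input string
print(len(embeddings), "vectors of dim", len(embeddings[0]))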
-
CPU: i5-1335U
RAM: 16GB
OS: Ubuntu 22.04.10
Kernel: 6.8.0-45
logs:
```txt
2024/09/30 15:34:06 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBL…
```
-
**What problem or use case are you trying to solve?**
Hi, I have a private cloud where I have deployed an Ollama model.
But the documentation says: LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434"
it me…
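In a setup like this, the base URL just needs to point at wherever the Ollama server is actually reachable. A minimal sketch of overriding it from the environment; the variable name follows the quoted docs, but the surrounding code is an assumption:
```python
import os
import requests

# Point the client at a private-cloud Ollama deployment instead of
# the Docker-host default from the docs.
base_url = os.environ.get(
    "LLM_OLLAMA_BASE_URL", "http://host.docker.internal:11434"
)

# A simple liveness check: Ollama's root endpoint answers with
# "Ollama is running" when the server is reachable.
resp = requests.get(base_url, timeout=10)
print(resp.status_code, resp.text)
```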
-
I don't know why, but I'm encountering this problem with the library. Here is my simple script:
```python
import ollama
client = ollama.Client(host=llm_config["base_url"], timeout=600)
clie…
```
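For comparison, a typical call with the `ollama` Python client looks roughly like this (a minimal sketch; the host, timeout, and model name are placeholders):
```python
import ollama

# The client accepts a host URL plus httpx keyword arguments
# such as timeout (in seconds).
client = ollama.Client(host="http://localhost:11434", timeout=600)

# chat() takes a model name and an OpenAI-style message list.
response = client.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```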
-
### Description
Hi, I think you are calling the wrong endpoint for local embeddings with Ollama. If I use the settings from your instructions [here](https://github.com/Cinnamon/kotaemon/blob/main/docs/loca…
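One common source of confusion is that Ollama exposes both its native endpoints and an OpenAI-compatible API under `/v1`. A sketch of an embedding call through the latter, assuming the `openai` client and an example model name:
```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API under /v1; the api_key is
# required by the client but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

result = client.embeddings.create(
    model="nomic-embed-text",
    input="some text to embed",
)
print(len(result.data[0].embedding))
```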
-
# Problem Description
```txt
2024-11-09 14:01:12,107 - lightrag - INFO - Inserting XXX vectors to chunks
2024-11-09 14:02:22,145 - lightrag - INFO - Inserting XXX chunks Successfully
2024-11-09 14:02:22,1…
```
-
If you connect a locally-run OpenAI-compatible API, its models show up as external even though they are actually local; for example, models from LM Studio or from Ollama's experimental OpenAI-compatible API.
This is one …
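One way a frontend might distinguish these cases is by inspecting the configured base URL rather than assuming every OpenAI-compatible endpoint is remote. A rough heuristic sketch; the function and its rules are purely illustrative, not the project's code:
```python
from urllib.parse import urlparse

# Hosts that almost certainly mean the API runs on this machine.
LOCAL_HOSTS = {"localhost", "127.0.0.1", "::1", "host.docker.internal"}

def is_local_api(base_url: str) -> bool:
    """Guess whether an OpenAI-compatible base URL points at a local server."""
    host = urlparse(base_url).hostname or ""
    return host in LOCAL_HOSTS

print(is_local_api("http://localhost:1234/v1"))   # LM Studio default -> True
print(is_local_api("https://api.openai.com/v1"))  # remote -> False
```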
-
### Issue
When benchmarking Ollama models in Docker, I found I could not use the regular Ollama API base URL from the documentation. What I needed instead was "http://host.docker.internal:11434" (MacO…
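This is a Docker networking quirk: inside a container, `localhost` refers to the container itself, so an Ollama server running on the host has to be reached via the special `host.docker.internal` hostname (available by default on Docker Desktop for macOS and Windows). A small sketch of picking the right base URL; the detection logic and the `OLLAMA_BASE_URL` variable are assumptions for illustration:
```python
import os
from pathlib import Path

def ollama_base_url() -> str:
    # Inside a container, "localhost" is the container itself, so the
    # host's Ollama server must be addressed via host.docker.internal.
    running_in_docker = Path("/.dockerenv").exists()
    host = "host.docker.internal" if running_in_docker else "localhost"
    return os.environ.get("OLLAMA_BASE_URL", f"http://{host}:11434")

print(ollama_base_url())
```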