-
As outlined in #4983, the Ollama endpoint that Meilisearch has been using (`/api/embeddings`) is deprecated.
A new `/api/embed` endpoint was introduced in v0.3.4.
Switching to the new endpoint h…
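For reference, a minimal sketch of calling the newer endpoint, assuming a local Ollama instance and the `nomic-embed-text` model (both placeholders): `/api/embed` takes `input` (a string or a list of strings) and returns a batched `embeddings` field, where the old `/api/embeddings` took a single `prompt` and returned one `embedding`.

```python
import requests

OLLAMA_URL = "http://localhost:11434"  # assumed local Ollama instance

# /api/embed (v0.3.4+) accepts "input" as a string or a list of strings
# and returns a batched "embeddings" field.
resp = requests.post(
    f"{OLLAMA_URL}/api/embed",
    json={"model": "nomic-embed-text", "input": ["first doc", "second doc"]},
)
resp.raise_for_status()
print(resp.json()["embeddings"])  # one vector per input string
```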
-
**What problem or use case are you trying to solve?**
Hi, I have a private cloud where I deployed an Ollama model.
But from the documentation: LLM_OLLAMA_BASE_URL="http://host.docker.internal:11434"
it me…
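If the goal is pointing a client at that private deployment, a minimal connectivity check might look like the sketch below; the host URL is a placeholder, and note that `host.docker.internal` only resolves from inside a container to the Docker host, not to a remote server.

```python
import ollama

# Placeholder address for a private-cloud deployment; a remote Ollama
# server needs its real reachable address here, not host.docker.internal.
client = ollama.Client(host="http://my-ollama.internal.example:11434")
print(client.list())  # quick connectivity check against the remote server
```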
-
CPU: i5-1335U
RAM: 16GB
OS: Ubuntu 22.04.10
Kernel: 6.8.0-45
logs:
```txt
2024/09/30 15:34:06 routes.go:1125: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBL…
```
-
I don't know why but I'm encountering this problem with the library. Here I show my simple script:
```python
import ollama
client = ollama.Client(host=llm_config["base_url"], timeout=600)
clie…
```
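For comparison, a minimal complete call with the same client setup, assuming the truncated line issues a chat request; the model name and host are placeholders.

```python
import ollama

# Minimal sketch of a complete chat call; "llama3.2" and the host
# below are placeholders for the user's actual configuration.
client = ollama.Client(host="http://localhost:11434", timeout=600)
response = client.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response["message"]["content"])
```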
-
### Description
Hi, I think you are calling the wrong endpoint for local embeddings with Ollama. If I use the settings from your instructions [here](https://github.com/Cinnamon/kotaemon/blob/main/docs/loca…
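One way to check which embedding route the server actually serves is a quick probe like the sketch below, assuming a local Ollama server and a placeholder model; the payload carries both `input` (used by `/api/embed` and the OpenAI-compatible `/v1/embeddings`) and `prompt` (used by the legacy `/api/embeddings`), on the assumption that each route ignores the key it doesn't use.

```python
import requests

# Probe each candidate embedding route and report its HTTP status, so the
# client configuration can be matched to whatever the server exposes.
base = "http://localhost:11434"
payload = {"model": "nomic-embed-text", "input": "ping", "prompt": "ping"}
for path in ("/api/embed", "/api/embeddings", "/v1/embeddings"):
    r = requests.post(base + path, json=payload)
    print(path, r.status_code)
```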
-
Hi Team,
I am already using LMStudio and Ollama for model deployments. Given that this model is llama.cpp-compatible and uses that, how can it be deployed, hosted, and used with LMStudio or Ollama? It …
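For the Ollama side specifically, a llama.cpp-compatible GGUF file can usually be registered through a Modelfile; a hedged sketch, where the file and model names are placeholders:

```python
import subprocess

# Register a local GGUF file with Ollama: write a Modelfile pointing at
# the weights, then build and run it via the Ollama CLI.
with open("Modelfile", "w") as f:
    f.write("FROM ./model.gguf\n")

subprocess.run(["ollama", "create", "my-model", "-f", "Modelfile"], check=True)
subprocess.run(["ollama", "run", "my-model", "Say hello"], check=True)
```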
-
### Describe the bug
The program seems to have a bug where it doesn't recognize which model you've specified. I've tried Llama 3.2 with Ollama and Gemini, but when trying to run either of the…
-
## Description
When using the plugin with an LLM model running on an Ollama server hosted locally (e.g., on another server within the same local network), the plugin successfully connects to the Ollama AP…
-
# Problem Description
```txt
2024-11-09 14:01:12,107 - lightrag - INFO - Inserting XXX vectors to chunks
2024-11-09 14:02:22,145 - lightrag - INFO - Inserting XXX chunks Successfully
2024-11-09 14:02:22,1…
```
-
When the API is called as in the example:
https://github.com/ollama/ollama/blob/main/docs/api.md#chat-request-with-tools
but streaming is enabled (`stream: true`),
the response doesn't contain to…
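A minimal reproduction sketch of the streaming case, assuming a local Ollama server and a placeholder model with tool support; the tool schema mirrors the linked docs example, and the loop prints whatever `tool_calls` field each streamed chunk carries so the missing data is visible.

```python
import json
import requests

payload = {
    "model": "llama3.1",  # placeholder; any tool-capable model
    "stream": True,
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }],
}

with requests.post("http://localhost:11434/api/chat",
                   json=payload, stream=True) as r:
    r.raise_for_status()
    for line in r.iter_lines():
        if not line:
            continue
        chunk = json.loads(line)
        # Check each streamed chunk for a tool_calls field on the message.
        print(chunk.get("message", {}).get("tool_calls"), chunk.get("done"))
```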