-
## Issue
* When deploying the image to Docker Hub, no `BOT_TOKEN` or `GUILD_ID` is provided, so the image is not usable out of the box.
* Usage is not interrupted, but the Docker Hub image is not usable unless some…
-
### What is the issue?
When using different context sizes (`num_ctx`) with the Ollama embedding model, I noticed big differences in the cosine similarity of the embeddings. Specifically, when I set t…
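A minimal way to reproduce such a comparison is sketched below, assuming a local Ollama server on its default port and an embedding model such as `nomic-embed-text` (both are assumptions; substitute your own setup). The `embed` helper passes `num_ctx` through the request's `options`, and `cosine` compares the resulting vectors:

```python
import json
import urllib.request
from math import sqrt

# Default local Ollama endpoint (assumption; adjust for your setup).
OLLAMA_URL = "http://localhost:11434/api/embeddings"

def embed(prompt: str, num_ctx: int, model: str = "nomic-embed-text"):
    """Request an embedding with an explicit num_ctx from a running Ollama server."""
    body = json.dumps({
        "model": model,
        "prompt": prompt,
        "options": {"num_ctx": num_ctx},
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

With a server running, calling `cosine(embed(text, 2048), embed(text, 8192))` for the same text at different context sizes would surface the kind of divergence described above.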
-
**Is your feature request related to a problem? Please describe:**
Hi, I modified `.env.local` with:
```bash
# You only need this environment variable set if you want to use oLLAMA models
#EXAMPLE…
-
I am running this example line for line: `cookbook/providers/ollama_tools/knowledge.py`, but it throws an error:
`ERROR Error processing document 'ThaiRecipes': The api_key client option must …`
-
So I sometimes get an error when the bot tries to answer:
[OllamaAPI-ERR] CAUGHT FAULT!
Traceback (most recent call last):
File "/home/orangepi/ollama-telegram/bot/run.py", line 270, in ollama_re…
-
The [ollamaclient](https://github.com/tmc/langchaingo/tree/main/llms/ollama/internal/ollamaclient) package is no longer up to date.
We should remove it and instead use the official [ollama client](htt…
-
from paperqa import Settings, ask
import os
os.environ["OPENAI_API_KEY"] = "EMPTY"
local_llm_config = {
"model_list": [
{
"model_name": "ollama/llama3",
"litellm_params": {
"model": "ollama/ll…
-
**Describe the bug**
Asking the same QUESTION from the Training Data returns an error
**To Reproduce**
Steps to reproduce the behavior:
1. Go to 'Training Data'
2. Copy one of the 'QUESTION'
3. Click on 'New q…
-
The docs mention it but never give an example of how to run it using local inference. The only mention is the OpenAI-compatible API, but it doesn't support all of the functions.
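As a starting point, a local-inference call can be sketched against Ollama's OpenAI-compatible endpoint using only the standard library. The base URL and dummy key follow Ollama's documented defaults; the model name is an assumption:

```python
import json
import urllib.request

# Ollama serves an OpenAI-compatible API under /v1 on its default port.
BASE_URL = "http://localhost:11434/v1"

def chat_request(model: str, prompt: str) -> dict:
    """Build a request body for the /v1/chat/completions endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send(body: dict) -> dict:
    """POST the request to the local server (requires Ollama to be running)."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(body).encode(),
        headers={
            "Content-Type": "application/json",
            # Any non-empty key is accepted by a local Ollama server.
            "Authorization": "Bearer ollama",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = chat_request("llama3", "Say hello.")
```

Note that endpoints beyond chat completions (e.g. some tool/function features) may not be covered, which is the gap the issue above describes.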
-
### System Info
N/A
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### 🐛 Describe the bug
`llama-stack-client: command not found`, which is mentioned in https://github…