### What is the issue?
The streamed chat-completion response from ollama's openai-compatible API repeats `"role": "assistant"` in every returned chunk. This differs from OpenAI's API, which just has…
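A minimal sketch of the difference, assuming the usual chat-completion chunk shape (the `normalize_stream` helper and the sample deltas below are illustrative, not part of either API): OpenAI's streaming responses carry `"role"` only in the first chunk's delta, so Ollama-style chunks can be normalized by dropping the repeated role fields.

```python
def normalize_stream(deltas):
    """Keep "role" only in the first delta, matching OpenAI's streaming shape."""
    seen_role = False
    out = []
    for delta in deltas:
        delta = dict(delta)  # copy so the input chunks are not mutated
        if "role" in delta:
            if seen_role:
                del delta["role"]  # drop the repeated role in later chunks
            seen_role = True
        out.append(delta)
    return out

# Ollama-style stream: "role" repeated in every chunk (assumed example data).
ollama_deltas = [
    {"role": "assistant", "content": "Hel"},
    {"role": "assistant", "content": "lo"},
    {"role": "assistant", "content": "!"},
]

# After normalizing, only the first delta keeps "role", as with OpenAI.
print(normalize_stream(ollama_deltas))
```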
-
### Your current environment
The output of `python collect_env.py`
```text
Your output of `python collect_env.py` here
```
### Model Input Dumps
_No response_
### 🐛 Describe the bug
…
-
**Is your feature request related to a problem? Please describe.**
The doc refers to Ollama with the mixtral model.
**Describe the solution you'd like**
Update the doc.
**Describe alternativ…
-
Hello,
Thank you for the fantastic work on PaperQA. I’ve been able to use it to ask questions by providing over 100 papers as input, and I’ve been using only local models via Ollama. Everything is …
-
### Reminder
- [X] I have read the README and searched the existing issues.
### System Info
- `llamafactory` version: 0.9.1.dev0
- Platform: Linux-6.5.0-35-generic-x86_64-with-glibc2.35
- Pytho…
-
Currently, `evaluation.yaml` exists under the `configs/` directory. To start, we wanted to just showcase this recipe as an example, but it is a core part of the finetuning process and therefore shou…
-
**Describe the bug**
I am using self hosted TaskingAI Community v0.3.0.
I created an assistant connected to llama3.2 and it works well. But when I attach any action or tool to the assistant, it sta…
-
**Describe the bug**
I am trying to run `promptfoo eval` command and getting the below error.
`API call error: Error: Request failed after 4 API call error: Error: Request failed after 4 retries:…
-
/bounty 100
definition of done:
- simple-to-use script (can be python, whatever) to fine-tune a model (an LLM like llama3.2, a multimodal model, or OpenAI) on your screenpipe data
- some docs to run it an…