-
### System Info
- TensorRT-LLM v0.8.0 (pinned to release commit)
- Nvidia A100
- Mistral-7B-Instruct-v0.2
- Using the CPP runner
- Installed with `pip install tensorrt_llm==0.8.0 --extra-index-ur…
-
Noticing an `Error: unmarshal: invalid character 'p' after top-level value` when running `ollama run llava`
`client version is 0.1.22`
-
# Bug Report
## Description
**Bug Summary:**
The system prompt set at Settings -> General -> System prompt has been completely ignored lately (it worked correctly in my setup previously). Also…
-
Hi,
when running ollama, it hangs after a few calls to "generate".
It shows no error; it just hangs for hours until it is killed manually.
Stopping and then restarting ollama…
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is this question answered in the FAQ? | Is there an existing…
-
### What is the issue?
When I serve my VL models, they do not work correctly.
Here I tried MiniCPM-Llama3-V-2.5 and converted it to GGUF format following the instructions from the official repository:…
-
Opening a new issue (see https://github.com/ollama/ollama/pull/2195) to track support for integrated GPUs. I have an AMD 5800U CPU with integrated graphics. As far as I have researched, ROCR lately does su…
-
I have no idea, I am just going to put it here ;p
I have executed `pip install -r requirements.txt` and `pip install -r faster_whisper_requirements.txt` (because I want everything to run locally).
Pla…
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
```bash
(lmdeploy042) yuzail…
-
Now that OpenAI is adding voice and image to ChatGPT, and multimodal will probably become the new norm, wouldn't it be a good idea for llama.cpp to also add this to the roadmap, if possible?