-
Hi, I tried to run MiniCPM-V inference on different images in a loop, but I got the same result every time.
```
images = ["a.jpg", "b.jpg", "c.jpg"]
for img in images:
    # open the current loop image; opening a fixed `image_path` here
    # would load the same file on every iteration
    image = Image.open(img).convert('RGB')
    …
```
-
### Feature request / 功能建议
Could OCR capability be added to the 2B model?
-
Can't run on M1 Max [macOS]
# ComfyUI Error Report
## Error Details
- **Node Type:** EncodeDiffusersOutpaintPrompt
- **Exception Type:** AssertionError
- **Exception Message:** Torch not com…
-
### What is the issue?
I have already downloaded qwen:7b, but when I run `ollama run qwen:7b` I get this error: `Error: timed out waiting for llama runner to start:`. The server.log contains this message: `g…
-
### 是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
- [X] 我已经搜索过已有的issues和讨论 | I have searched the existing issues / discussions
### 该问题是否在FAQ中有解答? | Is there an existing ans…
-
### What is the issue?
When I run a quantized model on v0.1.37, it errors out with `Error: llama runner process has terminated: exit status 0xc0000409`.
First step:
```shell
>>> ollama create test_q8_0 …
-
Are there any methods to remove unwanted tokens from the tokenizer?
Referring to #4827, I tried to remove tokens from the tokenizer with the following code.
First, I fetch the tokenizer from hug…
-
Can the model be deployed across multiple GPUs? If so, could you explain how? A single 3090 doesn't have enough VRAM: once the conversation gets a little long it runs out of memory, and BMInf is too slow.
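Not an official answer, but the usual approach is to shard the model's layers across GPUs, which is what `device_map="auto"` in Transformers/Accelerate does under the hood. A minimal, framework-free sketch of the underlying idea, splitting layers into contiguous per-GPU groups by memory cost (all numbers and names are illustrative):

```python
# Illustrative only: assign contiguous runs of layers to GPUs so each GPU
# carries roughly an equal share of the total cost. Real placement (e.g.
# Accelerate's device_map) also accounts for activations, KV cache,
# tied weights, and actual free memory per device.

def plan_device_map(layer_costs, num_gpus):
    """Return {layer_index: gpu_index} with balanced contiguous splits."""
    budget = sum(layer_costs) / num_gpus  # target cost per GPU
    placement, gpu, used = {}, 0, 0.0
    for i, cost in enumerate(layer_costs):
        # move to the next GPU once this one would exceed its budget
        if used + cost > budget and gpu < num_gpus - 1:
            gpu += 1
            used = 0.0
        placement[i] = gpu
        used += cost
    return placement

# e.g. 6 equally sized transformer blocks over 2 GPUs
print(plan_device_map([1.0] * 6, 2))  # {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
```

Contiguous splits matter here: during a forward pass, activations then only cross a device boundary once per split instead of bouncing between GPUs at every layer.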
-
### Feature request / 功能建议
The README's claim that `MiniCPM-V 也首次跑通了多模态大模型在手机上的部署` (that MiniCPM-V is the first to get a multimodal LLM running deployed on a phone) may not be entirely accurate.
- The [llama.cpp](https://github.com/ggerganov/llama.cpp#:~:text=GPT%2D2-,Multimodal%20models%3A,-Llava%201.5%20models) project…
-
To reproduce (on a Linux machine with 1x A100 80GB):
1. First, create a new Python 3.10 environment with Anaconda:
> conda create --name lmms python=3.10
> conda activate lmms
2. Install l…