-
### Required prerequisites
- [X] I have searched the [Issue Tracker](https://github.com/PKU-Alignment/align-anything/issues) and [Discussions](https://github.com/PKU-Alignment/align-anything/discussi…
-
When running the minicpm-v model with ollama, I found that calling only the LLM text portion runs on the iGPU as expected.
But when an image and text are submitted together, the LLM falls back to running on the CPU.
```
ollama run minicpm-v:latest
```
Test prompt (the prompt asks "What does the image show?"):
```json
{
  "model": "minicpm-v:latest",
  "prompt": "图片讲了什么内容?",
  "images": […
```
-
```
git clone https://www.modelscope.cn/models/linglingdan/MiniCPM-V_2_6_awq_int4
```
Running inference with this AWQ INT4 quantized model uses about 20 GB of GPU memory, which is basically the same as the fp model. Could there be a problem with the quantization?
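One way to tell whether the weights are actually resident as INT4 is to measure allocated GPU memory right after load. A minimal sketch, assuming the checkpoint loads through transformers' `AutoModel` with `trust_remote_code` (the local path is hypothetical); if the custom modeling code silently upcasts the AWQ weights to fp16, this number will match the fp model:

```python
import torch
from transformers import AutoModel

model_path = "./MiniCPM-V_2_6_awq_int4"  # hypothetical local clone path

model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,  # MiniCPM-V ships custom modeling code
    torch_dtype=torch.float16,
).eval().cuda()

# Weight memory actually resident on the GPU: an INT4 8B checkpoint
# should need roughly a quarter of what the fp16 weights do.
print(f"allocated after load: {torch.cuda.memory_allocated() / 1e9:.1f} GB")
```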
-
Do vision models only support LLaVA-Phi-3-Mini? Do they support llava-v1.6-vicuna, llava-v1.6-mistral, llava-v1.5-13b, llava-v1.6-34b, and MiniCPM-V-2_6?
-
```
Traceback (most recent call last):
  File "/home/li/LLM/Native-LLM-for-Android-main/Export_ONNX/MiniCPM/MiniCPM-1B/MiniCPM_Export.py", line 54, in
    qkv_bias = torch.cat([layer_attn.q_proj.b…
```
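The cut-off line suggests the export script fuses the per-projection biases with `torch.cat`. If the checkpoint's attention layers carry no biases, `q_proj.bias` is `None` and the cat raises. A defensive sketch, assuming the usual `q_proj`/`k_proj`/`v_proj` naming (the full trace is truncated, so this is an assumption, not the script's actual code):

```python
import torch

def fused_qkv_bias(layer_attn: torch.nn.Module) -> torch.Tensor:
    """Concatenate q/k/v projection biases, substituting zeros where absent."""
    projections = (layer_attn.q_proj, layer_attn.k_proj, layer_attn.v_proj)
    biases = [
        p.bias if p.bias is not None
        else torch.zeros(p.out_features, dtype=p.weight.dtype)  # bias-free layer
        for p in projections
    ]
    return torch.cat(biases, dim=0)
```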
-
### Start Date
_No response_

### Implementation PR
Running the quantization specified at https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf requires the following parameters:
```
./llama-minicpmv-cli -m ../MiniCPM-V-2_6/model/ggml-mode…
```
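For completeness, a sketch of driving that invocation from Python with `subprocess`; the gguf file names and the image path below are assumptions based on the model card, not taken verbatim from it:

```python
import subprocess

# File names are assumed; substitute the gguf files you actually downloaded.
cmd = [
    "./llama-minicpmv-cli",
    "-m", "../MiniCPM-V-2_6/model/ggml-model-Q4_K_M.gguf",  # language model (assumed name)
    "--mmproj", "../MiniCPM-V-2_6/mmproj-model-f16.gguf",   # vision projector (assumed name)
    "--image", "test.jpg",
    "-p", "What is in the image?",
]
subprocess.run(cmd, check=True)
```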
-
### Model description
MiniCPM-V is a series of vision-language models from OpenBMB. We want to add support for MiniCPM-V-2 and later models.
### Open source status
- [x] The model implementation is av…
-
File "/root/ld/ld_project/pull_request/MiniCPM-V/web_demo_2.6.py", line 44, in
model = AutoModel.from_pretrained(model_path, trust_remote_code=True)
File "/root/ld/conda/envs/minicpm/lib/py…
-
I hope you're well. Please share your valuable insights; I really appreciate it.
I have a Python function that sends images to LM Studio, and it works well when called from a tester script.
But…
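Although the continuation is cut off, the usual shape of such a function is worth sketching. A minimal version, assuming LM Studio's OpenAI-compatible server on its default port 1234 (the model identifier is hypothetical); images travel as base64 data URIs inside the chat message:

```python
import base64
import requests

def ask_about_image(image_path: str, prompt: str) -> str:
    """Send an image plus prompt to LM Studio's OpenAI-compatible endpoint."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = requests.post(
        "http://localhost:1234/v1/chat/completions",  # LM Studio default port
        json={
            "model": "minicpm-v",  # hypothetical model identifier
            "messages": [{
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            }],
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```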
-
https://github.com/OpenBMB/MiniCPM-V/blob/a209258d851f404485e5ae25864417dff3bb74ca/eval_mm/vlmevalkit/vlmeval/dataset/videomme.py The code says 8 frames are used per video, but the leaderboard says (htt…
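For context, the uniform frame sampling that eval kits like this typically use looks like the sketch below (an illustration, not the repo's exact code); the frame count in question is just the `num_frames` argument:

```python
import numpy as np

def sample_frame_indices(total_frames: int, num_frames: int = 8) -> list[int]:
    """Pick num_frames indices spread uniformly across a video's frames."""
    return np.linspace(0, total_frames - 1, num=num_frames, dtype=int).tolist()
```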