-
### Model description
MiniCPM-V is a series of vision-language models from OpenBMB.
We would like support added for MiniCPM-V-2 and later models.
### Open source status
- [x] The model implementation is av…
-
This model is the most powerful multimodal model I have tried so far, and it has a large user base. However, it is not currently supported by ollama.
-
### Feature request
MiniCPM (https://github.com/OpenBMB/MiniCPM) is the smallest multimodal model available. The latest version, https://huggingface.co/openbmb/MiniCPM-V-2, appears to be able to understand G…
-
This is the best open-source vision model I have ever tried. We need support for it in ollama.
-
I tested the minicpm-v-llama3-2.5 model with all parameters kept the same as minicpm-v-2.0, but it performs worse than minicpm-v-2.0.
-
Please add support for MiniCPM-Llama3-V-2_5.
- HuggingFace Page: https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5
- Github : https://github.com/OpenBMB/MiniCPM-V
Currently I am using vllm 0.5.0.p…
-
Hello, there are models available for MiniCPM-Llama3-V 2.5:
https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/blob/main/mmproj-model-f16.gguf
https://huggingface.co/openbmb/MiniCPM-Llama3-V-2…
-
https://github.com/OpenBMB/MiniCPM-V
-
I tried to use a benchmark tool to test this multimodal model and ran into the error below.
Model link: https://huggingface.co/openbmb/MiniCPM-V-2
Tool link: https://github.com/intel-analytics/ipex-llm/tree/…
-
Training script:
```shell
--model_type minicpm-v-v2_5-chat \
--model_id_or_path /data/MiniCPM-V/pretrained/MiniCPM-Llama3-V-2_5 \
--dataset /data/swift/finetune/train_0703.jsonl \
--ddp_find_unused_pa…