-
### Feature request
https://github.com/OpenBMB/MiniCPM is the smallest multimodal model available. The latest version, https://huggingface.co/openbmb/MiniCPM-V-2, appears to be able to understand G…
-
This is the best open-source vision model I have ever tried. We need support for it in Ollama.
-
I tested the MiniCPM-Llama3-V 2.5 model, keeping all parameters the same as for MiniCPM-V 2.0, but it performs worse than MiniCPM-V 2.0.
-
Please add support for MiniCPM-Llama3-V-2_5.
- HuggingFace Page: https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5
- GitHub: https://github.com/OpenBMB/MiniCPM-V
Currently I am using vllm 0.5.0.p…
-
### Model description
MiniCPM-Llama3-V 2.5
- Built on SigLIP-400M and Llama3-8B-Instruct, with 8B parameters in total
- Strong OCR capabilities and multilingual support
- in several benchmarks on pa…
-
Hello, there are GGUF files available for MiniCPM-Llama3-V 2.5:
https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/blob/main/mmproj-model-f16.gguf
https://huggingface.co/openbmb/MiniCPM-Llama3-V-2…
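Until Ollama ships native support, one way to try these GGUF files is through OpenBMB's llama.cpp fork. A rough sketch follows; the quantized model filename is illustrative, and the CLI binary name and flags have changed across versions:

```shell
# Sketch only (not Ollama): run the GGUF weights with OpenBMB's llama.cpp fork.
# Assumptions: the fork is already built in the current directory, and a
# quantized main-model GGUF has been downloaded alongside the mmproj file above.
./minicpmv-cli \
  -m ./ggml-model-Q4_K_M.gguf \
  --mmproj ./mmproj-model-f16.gguf \
  --image ./example.jpg \
  -p "Describe this image."
```

The `--mmproj` file carries the vision projector; without it the model runs text-only.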
-
1. Download MiniCPM-V model from ModelScope
2. Convert the model to low-bit format with the command in GPU/ModelScope-Models/Save-Load, as follows:
python ./generate.py --repo-id-or-model-path ./models/OpenBMB…
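The download step (1) above can be sketched with the ModelScope SDK. The repo id and cache directory here are assumptions for illustration:

```shell
# Assumption: MiniCPM-V-2 is published under the OpenBMB org on ModelScope,
# and the modelscope Python package is installed.
pip install modelscope
python - <<'EOF'
from modelscope import snapshot_download  # ModelScope's model-download helper
path = snapshot_download('OpenBMB/MiniCPM-V-2', cache_dir='./models')
print(path)  # local directory to pass to the conversion script
EOF
```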
-
Thanks for your great work!
But I can't seem to find any training details or papers on the multimodal version of MiniCPM (MiniCPM-V 2.0). I wonder whether they are not available yet, or where I can find them.
-
I tried to use the benchmark tool to test this multimodal model and hit the error below.
Model link: https://huggingface.co/openbmb/MiniCPM-V-2
Tool link: https://github.com/intel-analytics/ipex-llm/tree/…
-
Training script
```
--model_type minicpm-v-v2_5-chat \
--model_id_or_path /data/MiniCPM-V/pretrained/MiniCPM-Llama3-V-2_5 \
--dataset /data/swift/finetune/train_0703.jsonl \
--ddp_find_unused_pa…