-
### What happened?
```
export LLAMA_CUDA=1 # only for NVIDIA CUDA
export CUDA_DOCKER_ARCH=compute_86
make -j$(nproc) NVCC=/usr/local/cuda/bin/nvcc
./llama-llava-cli -m ./m2/moondream2-text-model-…
```
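For context: newer llama.cpp checkouts renamed the Makefile switch from `LLAMA_CUDA` to `GGML_CUDA`, so on a recent tree the equivalent build is roughly (a sketch, assuming a post-rename checkout):
```
# GGML_CUDA replaced LLAMA_CUDA in newer llama.cpp Makefiles;
# compute_86 targets Ampere GPUs (e.g. RTX 30xx)
export GGML_CUDA=1
export CUDA_DOCKER_ARCH=compute_86
make -j$(nproc) NVCC=/usr/local/cuda/bin/nvcc
```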
-
Run
`pytest -s tests/models/test_minicpmv.py::test_models[5-128-half-size_factors0-openbmb/MiniCPM-Llama3-V-2_5]`
on top of https://github.com/pytorch/pytorch/pull/133742.
A simpler repro:
```
i…
```
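A sketch of running that selector on top of the PR (assuming the GitHub CLI and separate pytorch/vllm checkouts, neither of which is stated above):
```
# inside a pytorch clone: fetch the PR branch (gh is the GitHub CLI),
# then build/install pytorch from it
gh pr checkout 133742
# from a vllm checkout, run the single test; the id is quoted
# because it contains brackets
pytest -s "tests/models/test_minicpmv.py::test_models[5-128-half-size_factors0-openbmb/MiniCPM-Llama3-V-2_5]"
```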
-
### I keep getting "allocation on device". I have tried removing my ComfyUI arguments '--fast --normalram', and that didn't work. I'm on a 4060 Ti 16 GB with 32 GB of RAM.
Loading Pixtral model: pixtral-12b-nf4…
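ComfyUI also exposes VRAM-policy launch flags, which are a common next step for device-allocation errors; a hedged sketch (behavior varies by ComfyUI version):
```
# ask ComfyUI to offload more aggressively to system RAM
python main.py --lowvram
```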
-
### Training script
```
#!/bin/bash
GPUS_PER_NODE=2
NNODES=1
NODE_RANK=0
MASTER_ADDR=localhost
MASTER_PORT=6001
MODEL="/root/MiniCPM-V/pretrained_weights/MiniCPM-V-2_6" # or openbmb/MiniCPM-V-2, openbmb/…
```
-
### Is there an existing issue / discussion for this?
- [X] I have searched the existing issues / discussions
### Is there an existing ans…
-
Using CUDA 12.1, `make -j` errors out partway through. The full install procedure was:
```
(cuda12_1) root@I19359398490090128f:/hy-tmp# cd fastllm-master/
(cuda12_1) root@I19359398490090128f:/hy-tmp/fastllm-master# mkdir build
(cuda12_1) root@I19359398…
```
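The transcript is cut off; a fastllm CUDA build usually continues along these lines (a sketch based on fastllm's documented CMake flow, worth verifying against your checkout):
```
# typical continuation: configure with CUDA enabled, then build
cd build
cmake .. -DUSE_CUDA=ON
make -j
```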
-
### Feature Name
MiniCPM-V2.0
### Feature Description
Research about MiniCPM-V2.0
### Research Findings
### MiniCPM-V2.0
**MiniCPM-V2.0** is a multimodal (vision-language) model developed by the Beijing…
-
### Your current environment
```text
vllm 0.5.5
vllm-flash-attn 2.6.1
```
downloa…
-
Training script
```
--model_type minicpm-v-v2_5-chat \
--model_id_or_path /data/MiniCPM-V/pretrained/MiniCPM-Llama3-V-2_5 \
--dataset /data/swift/finetune/train_0703.jsonl \
--ddp_find_unused_pa…
```
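Those flags read like arguments to ms-swift's `swift sft` fine-tuning command; a minimal invocation sketch under that assumption (flag names vary across swift versions):
```
swift sft \
    --model_type minicpm-v-v2_5-chat \
    --model_id_or_path /data/MiniCPM-V/pretrained/MiniCPM-Llama3-V-2_5 \
    --dataset /data/swift/finetune/train_0703.jsonl
```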
-
**Describe the bug**
Following the MiniCPM-V-2 best-practice guide: at inference, after entering a prompt and an image path, the program produces no output and raises no error.
![企业微信截图_17249886876962](https://githu…