-
Thanks for adding VLM support.
I was using [this](https://github.com/stanfordnlp/dspy/blob/main/examples/vlm/mmmu.ipynb) notebook. I tried it with `Qwen2-VL-7B-Instruct` and `Llama-3.2-11B-Vision-…
-
Add a few small models from HuggingFace.
https://huggingface.co/HuggingFaceTB/SmolLM2-1.7B-Instruct
-
It seems that `qwen_vl_utils` occupies excessive memory during preprocessing, until it gets killed by the system.
Traceback before it gets killed:
```
File "/mnt/workspace/lmms-eval-main/lmms_eval/model…
```
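For context, a minimal sketch of how Qwen2-VL-style "smart resize" preprocessing bounds per-image pixel counts. This is a simplified re-implementation, not the actual `qwen_vl_utils` code; the 28-pixel patch factor matches Qwen2-VL's documented patching, but the defaults here are assumptions. Lowering `max_pixels` when creating the processor is the usual knob for taming preprocessing RAM.

```python
# Simplified sketch (not the actual qwen_vl_utils implementation) of
# smart-resize-style preprocessing: each image is rescaled so its pixel count
# fits within [min_pixels, max_pixels], with sides rounded to a multiple of
# the patch factor (28 for Qwen2-VL).
import math

def estimate_resized_pixels(height, width, factor=28,
                            min_pixels=256 * 28 * 28,
                            max_pixels=1280 * 28 * 28):
    """Estimate the pixel count after a smart-resize-style rescale."""
    # Round each side to the nearest multiple of `factor`.
    h = max(factor, round(height / factor) * factor)
    w = max(factor, round(width / factor) * factor)
    if h * w > max_pixels:
        # Scale down so the area fits under max_pixels.
        beta = math.sqrt((height * width) / max_pixels)
        h = math.floor(height / beta / factor) * factor
        w = math.floor(width / beta / factor) * factor
    elif h * w < min_pixels:
        # Scale up so the area reaches min_pixels.
        beta = math.sqrt(min_pixels / (height * width))
        h = math.ceil(height * beta / factor) * factor
        w = math.ceil(width * beta / factor) * factor
    return h * w

# A 4000x3000 photo is capped near max_pixels instead of keeping ~12M pixels.
print(estimate_resized_pixels(4000, 3000) <= 1280 * 28 * 28)
```

If memory still grows without bound across many images, passing smaller `min_pixels`/`max_pixels` to the processor (as described in the Qwen2-VL model card) is worth trying before digging into the eval harness itself.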
-
### Model description
```
model=Qwen/Qwen-7B-Chat
volume=/hub_models/Qwen-7B-Chat/
docker run --gpus=1 --shm-size 1g -p 8080:80 -v $volume:/data \
ghcr.nju.edu.cn/predibase/lorax:latest --model…
```
-
Hello,
I am receiving this error:
> An error occurred: The checkpoint you are trying to load has model type `qwen2_vl` but Transformers does not recognize this architecture. This could be be…
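This error usually means the installed `transformers` predates the release that registered the `qwen2_vl` architecture. A minimal version-guard sketch, assuming (check the Transformers release notes) that support landed in 4.45.0:

```python
# Minimal version guard. The 4.45.0 threshold is an assumption here; confirm
# against the Transformers release notes for the `qwen2_vl` architecture.
def version_tuple(v):
    """Parse 'X.Y.Z' into a comparable tuple of ints, ignoring suffixes."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def supports_qwen2_vl(installed_version, required="4.45.0"):
    """True if this transformers version should recognize `qwen2_vl`."""
    return version_tuple(installed_version) >= version_tuple(required)

print(supports_qwen2_vl("4.44.2"))  # False: upgrade with `pip install -U transformers`
print(supports_qwen2_vl("4.46.1"))  # True
```

In practice, upgrading with `pip install -U transformers` and re-checking `transformers.__version__` resolves this class of "unrecognized architecture" error.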
-
I am trying to quantize [lightblue/qarasu-14B-chat-plus-unleashed](https://huggingface.co/lightblue/qarasu-14B-chat-plus-unleashed), which is based on [Qwen/Qwen-14B-Chat](https://huggingface.co/Qwen/Qwen-14B-Chat).
…
-
### How are you running AnythingLLM?
AnythingLLM desktop app
### What happened?
I chose the Generic OpenAI settings in the LLM Provider settings and filled in Qwen's API. When chatting, the dialog box …
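When pointing a generic OpenAI-compatible client at Qwen's API, the request has to follow the OpenAI chat-completions shape. A minimal sketch of building such a payload; the path follows the OpenAI API convention, and the model name `qwen-plus` is a placeholder to be replaced with whatever your Qwen API dashboard lists:

```python
# Build an OpenAI-style chat-completions payload for an OpenAI-compatible
# endpoint. The model name below is a placeholder; substitute the value from
# your Qwen API dashboard.
import json

def build_chat_request(model, user_message, system_prompt=None):
    """Return (path, body) for an OpenAI-style chat-completions call."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_message})
    body = {"model": model, "messages": messages}
    return "/v1/chat/completions", json.dumps(body)

path, body = build_chat_request("qwen-plus", "Hello!")
print(path)  # /v1/chat/completions
```

If the dialog box shows a malformed response, comparing the request AnythingLLM actually sends against this shape (and against the base URL the provider expects) is a good first debugging step.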
-
Hello. When fine-tuning Qwen on two A100 80G cards with the original code, GPU memory usage is only 919M on each card, but during data loading the host RAM usage keeps growing until it exceeds 180 GB and the program is killed. How can this be resolved?
Training log:
![image](https://github.com/TideDra/VL-RLHF/assets/36758049/09277b55-ea0a-4cfd-875b-792f457441a2…
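Runaway host RAM during data loading often comes from eagerly decoding every sample up front. A minimal sketch (not the actual VL-RLHF loader) of deferring the expensive load to `__getitem__`, so memory stays proportional to the batch rather than the whole dataset:

```python
# Sketch of lazy, per-item loading (not the actual VL-RLHF code): keep only
# lightweight paths in memory and defer the expensive decode to access time.
class LazyDataset:
    def __init__(self, paths, load_fn):
        self.paths = paths          # cheap: just strings
        self.load_fn = load_fn      # e.g. image open + preprocess

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # The decode happens here, one sample at a time.
        return self.load_fn(self.paths[idx])

# Demonstrate that construction does no loading.
loads = []
ds = LazyDataset(["a.jpg", "b.jpg", "c.jpg"],
                 lambda p: loads.append(p) or p.upper())
print(len(loads))   # 0: nothing decoded yet
print(ds[1])        # B.JPG
print(len(loads))   # 1: exactly one decode
```

If the training code instead preloads or caches all decoded images/tensors in a list before training starts, that would match the symptom of RAM climbing steadily during data loading while GPU memory stays flat.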
-
```python
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

model = Qwen2VLForConditionalGeneration.from_…
```
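The snippet above is cut off, but for context: `process_vision_info` consumes chat messages where image entries sit alongside text in the user turn's `content` list. A minimal sketch of building that structure (pure data, no model download; the file URI is a placeholder):

```python
# Build the chat-message structure that qwen_vl_utils.process_vision_info
# consumes: image and text entries share the user turn's `content` list.
# The image URI below is a placeholder.
def build_vl_message(image_uri, prompt):
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_uri},
                {"type": "text", "text": prompt},
            ],
        }
    ]

messages = build_vl_message("file:///path/to/demo.jpg", "Describe this image.")
print(messages[0]["role"])                          # user
print([c["type"] for c in messages[0]["content"]])  # ['image', 'text']
```

These `messages` are then passed both to the processor's chat template and to `process_vision_info`, which extracts the image inputs for the model.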