-
### Checklist
- [X] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue y…
-
### System Info
- `transformers` version: 4.45.0.dev0
- Platform: macOS-14.6.1-arm64-arm-64bit
- Python version: 3.12.4
- Huggingface_hub version: 0.24.6
- Safetensors version: 0.4.5
- Acceler…
-
### Motivation
Is there a plan to support the deployment of Qwen2-VL?
### Related resources
_No response_
### Additional context
_No response_
-
Qwen-VL ([ArXiv](https://arxiv.org/abs/2308.12966), [GitHub](https://github.com/QwenLM/Qwen-VL), [HuggingFace](https://huggingface.co/Qwen/Qwen-VL)) shows very promising results on various tasks, would…
-
### Describe the bug
As the progress bar is loaded with the specified value, text added to the ProgressBar, e.g.
`283/500`, can be stacked and rendered incorrectly:
Once the animation …
-
Deploying Qwen2-VL-7B-Instruct with vLLM: with prefix caching enabled, inference on image data raises a shape-mismatch error. Prefix caching with text-only data does not error, and image data with prefix caching disabled does not error either.
Error:
File "/opt/miniforge3/envs/vllm-qwen2-vl/lib/python3.10/site-…
-
Followed the installation tutorial; vLLM fails on an H100 GPU, using the latest image pulled last night.
1. no module 'Qwen2-7B-Instruct',
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model model_path
chat_response = …
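The truncated `chat_response = …` call above presumably queries the vLLM OpenAI-compatible endpoint started by the `api_server` command. A minimal sketch of building such a request body, assuming the server is reachable at the default `http://localhost:8000/v1` and using a placeholder image (the helper name and image bytes are hypothetical, not from the original report):

```python
import base64
import json

def build_vl_chat_request(model: str, prompt: str, image_bytes: bytes) -> dict:
    """Build a JSON body for POST /v1/chat/completions with one inline image.

    Follows the OpenAI chat-completions vision message format, which the
    vLLM OpenAI-compatible server accepts for vision-language models.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    }

# Placeholder bytes stand in for a real JPEG file read from disk.
body = build_vl_chat_request("Qwen2-VL-7B-Instruct", "Describe this image.", b"\xff\xd8fake")
print(json.dumps(body)[:40])
```

This payload can then be POSTed to the server with any HTTP client (e.g. `requests.post(url, json=body)`).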
-
### System Info
`from transformers import Qwen2VLForConditionalGeneration,AutoTokenizer, AutoProcessor`
then the following problem occurs:
`Traceback (most recent call last):
File "E:\codeProject\p…
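The traceback is truncated, so the exact cause is unknown, but a common reason for this import failing is an installed `transformers` older than the release that added `Qwen2VLForConditionalGeneration`. A quick sketch of a version-prefix check (the helper is hypothetical, and the 4.45.0 threshold is an assumption based on when Qwen2-VL support landed in `transformers`):

```python
# Hypothetical helper: check whether an installed transformers version string
# meets the assumed minimum for Qwen2VLForConditionalGeneration (4.45.0).

def meets_min_version(installed: str, required: str = "4.45.0") -> bool:
    """Compare dotted version strings numerically, keeping only the digits
    in each dot-separated piece and stopping at the first non-numeric piece."""
    def key(v: str) -> list[int]:
        parts = []
        for piece in v.split("."):
            digits = "".join(ch for ch in piece if ch.isdigit())
            if not digits:
                break
            parts.append(int(digits))
        return parts
    return key(installed) >= key(required)

print(meets_min_version("4.45.0.dev0"))  # → True (dev builds pass this prefix check)
print(meets_min_version("4.44.2"))       # → False
```

In practice, the installed version can be obtained with `importlib.metadata.version("transformers")` before attempting the import.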
-
```
from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor, AutoConfig
from qwen_vl_utils import process_vision_info
import torch
model_name = "Qwen/Qwen2-VL-7B-I…
-
If escape, no lookup should be made in VL.
✔️