-
This will help you. In fact, Qwen2-VL supports object detection!
- https://github.com/QwenLM/Qwen2-VL/issues/9
- https://github.com/QwenLM/Qwen2-VL/issues/105
- https://github.com/QwenLM/Qwen2-VL/…
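Per the linked issues, Qwen2-VL can return grounding results as special tokens in its text output. The exact token format below is an assumption based on the model's released special tokens (`<|object_ref_start|>`, `<|box_start|>`, and a 0–1000 normalized coordinate grid); a minimal sketch of turning such a response into pixel-space boxes:

```python
import re

# Assumed Qwen2-VL grounding format (not verified against every release):
#   <|object_ref_start|>label<|object_ref_end|><|box_start|>(x1,y1),(x2,y2)<|box_end|>
# with coordinates normalized to a 0-1000 grid.
BOX_RE = re.compile(
    r"<\|object_ref_start\|>(.*?)<\|object_ref_end\|>"
    r"<\|box_start\|>\((\d+),(\d+)\),\((\d+),(\d+)\)<\|box_end\|>"
)

def parse_boxes(text, img_w, img_h):
    """Extract (label, pixel-space box) pairs from a grounding response."""
    boxes = []
    for label, x1, y1, x2, y2 in BOX_RE.findall(text):
        # Rescale from the 0-1000 normalized grid to pixel coordinates.
        boxes.append((
            label,
            (int(x1) * img_w // 1000, int(y1) * img_h // 1000,
             int(x2) * img_w // 1000, int(y2) * img_h // 1000),
        ))
    return boxes

sample = ("<|object_ref_start|>dog<|object_ref_end|>"
          "<|box_start|>(100,200),(500,800)<|box_end|>")
print(parse_boxes(sample, img_w=1000, img_h=500))
# → [('dog', (100, 100, 500, 400))]
```

If the model's actual output format differs (it has changed between Qwen-VL and Qwen2-VL), only the regex needs adjusting.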
-
A 2B Vision Language Model. It doesn't run on llama.cpp, so I'd like to run it with ONNX.
https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct
-
### System Info
Qwen2-VL adds a new M-RoPE feature; please support it.
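For context, M-RoPE (multimodal rotary position embedding) indexes each token along three axes (temporal, height, width) instead of a single sequence position. This is only an illustrative sketch of that indexing idea, not the library's actual implementation:

```python
def mrope_position_ids(grid_h, grid_w, t=0):
    """Build (temporal, height, width) position-id triples for one image's
    patch grid, flattened row-major -- a sketch of the 3-axis indexing
    behind Qwen2-VL's M-RoPE, not the model code itself."""
    return [(t, h, w) for h in range(grid_h) for w in range(grid_w)]

# A 2x3 patch grid at temporal step 0:
print(mrope_position_ids(2, 3))
# → [(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 0), (0, 1, 1), (0, 1, 2)]
```

Each axis then gets its own rotary embedding, which is the part inference backends need to add to support the model.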
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Task…
-
Qwen2-VL showed much better performance on multiple tasks. Will VLM2Vec try it?
-
### Model description
[Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct)
### Open source status
- [X] The model implementation is available
- [X] The model weights are …
-
When running the command below:

```shell
python3 -m lmms_eval \
    --model=qwen2_vl \
    --model_args pretrained="Qwen/Qwen2-VL-2B-Instruct",device_map=cuda \
    --tasks=mmstar,chartqa \
    --batch_size=…
```
-
SOTA lightweight vision model
[https://github.com/QwenLM/Qwen2-VL](https://github.com/QwenLM/Qwen2-VL)
llama.cpp issue [#9246](https://github.com/ggerganov/llama.cpp/issues/9246)
-
**Describe the bug**
With Qwen2-VL (7B), the app UI has no image-upload path and only supports text conversation; deploying with the web UI starts successfully, but conversation doesn't work.
**Your hardware and system info**
ms-swift: latest version, 2.4.2
-
Hello,
Would it be possible to include support for the Qwen2-VL model? Thank you.
-
Great job, but I found that the Qwen2-VL URL is wrong. The correct URL for Qwen2-VL is https://arxiv.org/pdf/2409.12191, but your list has https://huggingface.co/papers/2409.05840.