bash99 opened this issue 2 days ago
same issue +1
the same
It seems the cause has been found: https://github.com/vllm-project/vllm/pull/8829
We have just now fixed the issue in https://github.com/vllm-project/vllm/pull/8837. Please install vLLM from source to resolve the config loading problem.
vLLM still doesn't support question answering over multiple images or videos. Are there plans to fix this?
Multi-image input is currently supported in both offline and online inference, while video input is only supported for offline inference at the moment. If you need to pass videos via the OpenAI API, you can provide multiple images instead for now. Please check the example in examples/openai_vision_api_client.py (especially the part labelled "Multi-image input inference").
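For reference, a minimal sketch of what such a multi-image request body looks like when sent to the OpenAI-compatible server. The model name and image URLs below are placeholders, and the helper function is illustrative, not part of vLLM; the message shape follows the standard OpenAI vision chat format used in examples/openai_vision_api_client.py:

```python
# Sketch (assumptions: placeholder model name and URLs; the request shape
# is the OpenAI vision chat format, one "text" part plus one "image_url"
# part per image).
import json


def build_multi_image_request(model, question, image_urls):
    """Assemble a single user message carrying several images."""
    content = [{"type": "text", "text": question}]
    for url in image_urls:
        content.append({"type": "image_url", "image_url": {"url": url}})
    return {
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }


payload = build_multi_image_request(
    "Qwen/Qwen2-VL-7B-Instruct",  # placeholder model name
    "What is shown across these frames?",
    ["https://example.com/frame1.jpg", "https://example.com/frame2.jpg"],
)
print(json.dumps(payload, indent=2))
```

This payload can then be POSTed to the server's `/v1/chat/completions` endpoint (or passed through the official OpenAI client).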
Can we get a .post0 release for this? Installing from source is a lot more difficult.
vLLM 0.6.2 was released just a few hours ago, and it says it now supports multi-image inference with Qwen2-VL.
I've tried it, but it requires the newest transformers and installs it automatically.
When I start it with the following script (which worked with vLLM 0.6.1),
it reports an error like:
If I revert to the old transformers with
it reports an error like: