QwenLM / Qwen2-VL

Qwen2-VL is the multimodal large language model series developed by the Qwen team, Alibaba Cloud.
Apache License 2.0

Launching any Qwen2-VL series model with vLLM fails with 'assert "factor" in rope_scaling'. Please advise, thanks! #247

Closed: whitesay closed this issue 1 month ago

whitesay commented 1 month ago

The error output is as follows:

  File "/home/hibug/anaconda3/envs/shizhevl/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 132, in build_async_engine_client_from_engine_args
    if (model_is_embedding(engine_args.model, engine_args.trust_remote_code,
  File "/home/hibug/anaconda3/envs/shizhevl/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 73, in model_is_embedding
    return ModelConfig(model=model_name,
  File "/home/hibug/anaconda3/envs/shizhevl/lib/python3.10/site-packages/vllm/config.py", line 227, in __init__
    self.max_model_len = _get_and_verify_max_len(
  File "/home/hibug/anaconda3/envs/shizhevl/lib/python3.10/site-packages/vllm/config.py", line 1740, in _get_and_verify_max_len
    assert "factor" in rope_scaling
AssertionError

Versions (Python 3.10, CUDA 12.1):

vllm 0.6.1
vllm-flash-attn 2.6.1
transformers 4.45.0.dev0
transformers-stream-generator 0.0.4
torch 2.4.0
torchvision 0.19.0
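For reference, a typical launch command that hits this assertion looks like the line below (the specific checkpoint name is an assumption on my part; per the title, the failure occurs across the whole Qwen2-VL series):

python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2-VL-7B-Instruct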

whitesay commented 1 month ago

Printing hf_config inside vllm/config.py, I found that my config.json is not being loaded correctly; it looks like some default configuration is used instead?
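One way to check whether transformers itself parses the checkpoint's config.json correctly, independent of vLLM, is a minimal probe like the sketch below (the checkpoint name Qwen/Qwen2-VL-7B-Instruct is an assumption):

# Minimal sketch: load the model config the way vLLM ultimately does
# (through transformers) and inspect rope_scaling. It should mirror the
# mrope settings written in the checkpoint's config.json; on affected
# transformers builds the dict that comes back no longer matches the
# file, which is what trips vLLM's `assert "factor" in rope_scaling`.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")
print(cfg.rope_scaling)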

fyabc commented 1 month ago

@whitesay The latest version of transformers has a bug when loading the Qwen2-VL config.json. Please install the specific version as described in the README:

pip install git+https://github.com/huggingface/transformers@21fac7abba2a37fae86106f87fcf9974fd1e3830
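After installing the pinned commit, re-running a config probe (same hypothetical checkpoint as above) is a quick sanity check that rope_scaling now matches the checkpoint's config.json:

python -c "from transformers import AutoConfig; print(AutoConfig.from_pretrained('Qwen/Qwen2-VL-7B-Instruct').rope_scaling)"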
whitesay commented 1 month ago

Thanks!

Jasonsey commented 1 month ago

@whitesay The latest version of transformers has a bug when loading the Qwen2-VL config.json. Please install the specific version as described in the README:

pip install git+https://github.com/huggingface/transformers@21fac7abba2a37fae86106f87fcf9974fd1e3830

If you already have a master build of transformers installed, that command may not take effect, since pip can treat the requirement as already satisfied. Simply uninstall transformers first and then rerun the command, as shown below.
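A minimal command sequence for that, reusing the install line quoted above:

pip uninstall -y transformers
pip install git+https://github.com/huggingface/transformers@21fac7abba2a37fae86106f87fcf9974fd1e3830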

whitesay commented 1 month ago

Got it, thanks!