QwenLM / Qwen2-VL

Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
Apache License 2.0

Error when deploying with vLLM after installing transformers and vllm as instructed #348

Open xuyifan-0731 opened 3 days ago

xuyifan-0731 commented 3 days ago

Hi! My machine is an H800. I followed the installation steps from the repo as below:

pip install git+https://github.com/huggingface/transformers@21fac7abba2a37fae86106f87fcf9974fd1e3830 accelerate
pip install qwen-vl-utils
CUDA_VERSION=cu123
pip install 'vllm==0.6.1' --extra-index-url https://download.pytorch.org/whl/${CUDA_VERSION}

But when I run python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model /workspace..., the following error still appears:

 File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 498, in run_server
    async with build_async_engine_client(args) as async_engine_client:
  File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 110, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
  File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 132, in build_async_engine_client_from_engine_args
    if (model_is_embedding(engine_args.model, engine_args.trust_remote_code,
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 73, in model_is_embedding
    return ModelConfig(model=model_name,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/site-packages/vllm/config.py", line 227, in __init__
    self.max_model_len = _get_and_verify_max_len(
                         ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/xuyifan/miniconda3/envs/llama-qwen2vl/lib/python3.11/site-packages/vllm/config.py", line 1740, in _get_and_verify_max_len
    assert "factor" in rope_scaling
           ^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError
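The assertion that fails here is vLLM 0.6.1's check in `_get_and_verify_max_len`, which assumes every `rope_scaling` entry carries a `factor` key. Qwen2-VL's config.json uses an mrope-style `rope_scaling` that has no such key, so the server dies during startup. A minimal sketch of the mismatch (the `mrope_section` values are an illustrative assumption modeled on Qwen2-VL's config, not copied from it):

```python
# Illustrative mrope-style rope_scaling, as found in Qwen2-VL's config.json;
# the exact mrope_section values here are assumed for the sketch.
rope_scaling = {"type": "mrope", "mrope_section": [16, 24, 24]}

failed = False
try:
    # The check from vLLM 0.6.1's _get_and_verify_max_len: it expects a
    # "factor" key, which mrope-style configs do not carry.
    assert "factor" in rope_scaling
except AssertionError:
    failed = True

print("AssertionError raised:", failed)
```
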

I also tried installing vllm from https://github.com/fyabc/vllm, but the installation fails with:

          dist.run_commands()
        File "/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/distutils/dist.py", line 966, in run_commands                                      
          self.run_command(cmd)                                                                                                                               
        File "/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/dist.py", line 950, in run_command                                  
          super().run_command(command)                                                                                                                        
        File "/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/distutils/dist.py", line 985, in run_command                                       
          cmd_obj.run()                                                                                                                                       
        File "/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/command/editable_wheel.py", line 138, in run                        
          self._create_wheel_file(bdist_wheel)                                                                                                                
        File "/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/command/editable_wheel.py", line 341, in _create_wheel_file         
          files, mapping = self._run_build_commands(dist_name, unpacked, lib, tmp)                                                                            
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^                                                                            
        File "/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/command/editable_wheel.py", line 264, in _run_build_commands        
          self._run_build_subcommands()                                                                                                                       
        File "/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/command/editable_wheel.py", line 291, in _run_build_subcommands     
          self.run_command(name)                                                                                                                              
        File "/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/distutils/cmd.py", line 313, in run_command                                        
          self.distribution.run_command(command)                                                                                                              
        File "/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/dist.py", line 950, in run_command                                  
          super().run_command(command)                                                                                                                        
        File "/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/distutils/dist.py", line 985, in run_command                                       
          cmd_obj.run()                                                                                                                                       
        File "/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/command/build_ext.py", line 98, in run                              
          _build_ext.run(self)                                                                                                                                
        File "/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/distutils/command/build_ext.py", line 340, in run                                  
          self.build_extensions()                                                                                                                             
        File "<string>", line 229, in build_extensions                                                                                                        
        File "<string>", line 211, in configure                                                                                                               
        File "/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/subprocess.py", line 413, in check_call                                            
          raise CalledProcessError(retcode, cmd)                                                                                                              
      subprocess.CalledProcessError: Command '['cmake', '/workspace/xuyifan/vllm-add_qwen2_vl_new', '-G', 'Ninja', '-DCMAKE_BUILD_TYPE=RelWithDebInfo', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=/tmp/tmpwkyahi43.build-lib/vllm', '-DCMAKE_ARCHIVE_OUTPUT_DIRECTORY=/tmp/tmpr4g0c3x5.build-temp', '-DVLLM_TARGET_DEVICE=cuda', '-DVLLM_PYTHON_EXECUTABLE=/workspace/xuyifan/miniconda3/envs/llama-0924/bin/python', '-DVLLM_PYTHON_PATH=/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process:/tmp/pip-build-env-31q89vzo/site:/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python311.zip:/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11:/workspace/xuyifan/miniconda3/envs/llama-0924/lib/python3.11/lib-dynload:/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages:/tmp/pip-build-env-31q89vzo/normal/lib/python3.11/site-packages:/tmp/pip-build-env-31q89vzo/overlay/lib/python3.11/site-packages/setuptools/_vendor', '-DNVCC_THREADS=1', '-DCMAKE_JOB_POOL_COMPILE:STRING=compile', '-DCMAKE_JOB_POOLS:STRING=compile=172']' returned non-zero exit status 1.
      [end of output]                                                                                                                                         

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building editable for vllm
Failed to build vllm
ERROR: ERROR: Failed to build installable wheels for some pyproject.toml based projects (vllm)

Is there any solution to this?

For reference, here is my machine info:

nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2023 NVIDIA Corporation
Built on Fri_Sep__8_19:17:24_PDT_2023
Cuda compilation tools, release 12.3, V12.3.52
Build cuda_12.3.r12.3/compiler.33281558_0

uname -a
Linux platform 5.4.119-19.0009.28 #1 SMP Thu May 18 10:39:16 CST 2023 x86_64 x86_64 x86_64 GNU/Linux
Y-J-Zhang commented 2 days ago

+1, have you found a solution?

xuyifan-0731 commented 2 days ago

> +1, have you found a solution?

I found that the problem was that training had not used the correct environment: training itself ran, but the resulting tokenizer and config were slightly different and could not be loaded. After training again in an environment with transformers installed correctly, everything worked.

wolfibmmbi commented 1 day ago

+1, exactly the same error. Is it a vLLM version problem? But vLLM 0.6.2 already seems to support Qwen2-VL.

Littlecowherd commented 14 hours ago

> +1, exactly the same error. Is it a vLLM version problem? But vLLM 0.6.2 already seems to support Qwen2-VL.

Try installing the latest vLLM from source: https://docs.vllm.ai/en/latest/getting_started/installation.html#install-the-latest-code

Tongjilibo commented 11 hours ago

Renaming the type key to rope_type in the config.json under the model directory works around this error.
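A small sketch of that workaround, assuming the standard Hugging Face layout where config.json sits at the top of the model directory. The helper name `patch_rope_key` and the demo directory are made up for illustration, and the demo config is a stand-in for the real file; back up your actual config.json before editing it:

```python
import json
import tempfile
from pathlib import Path

def patch_rope_key(config_path):
    """Rename rope_scaling["type"] to rope_scaling["rope_type"] in config.json."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    rope = cfg.get("rope_scaling", {})
    if "type" in rope and "rope_type" not in rope:
        # Mutating `rope` also mutates `cfg`, since it is the same dict object.
        rope["rope_type"] = rope.pop("type")
        path.write_text(json.dumps(cfg, indent=2, ensure_ascii=False))
    return cfg

# Demo on a throwaway config.json with an assumed Qwen2-VL-style rope_scaling
with tempfile.TemporaryDirectory() as d:
    p = Path(d) / "config.json"
    p.write_text(json.dumps(
        {"rope_scaling": {"type": "mrope", "mrope_section": [16, 24, 24]}}))
    patched = patch_rope_key(p)
    print(patched["rope_scaling"])
```

As the comment above notes, this only works around the key-name mismatch; upgrading to a vLLM release with native Qwen2-VL support remains the cleaner fix.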