QwenLM / Qwen2-VL

Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
Apache License 2.0

Startup error when deploying qwen2-vl-7b with vLLM #35

Open Yao990x16 opened 2 weeks ago

Yao990x16 commented 2 weeks ago

Hardware: 4090 + i9-14900f
OS: Ubuntu 22.04
Environment: Python 3.8, vllm 0.5.5, vllm-flash-attn 2.6.1, transformers 4.45.0.dev0
Problem description: After creating a Python 3.8 environment with conda and installing vllm via pip, I also downloaded the model weights locally. Running the launch command "python -m vllm.entrypoints.openai.api_server --model /home/qwen2-vl-7b/qwen-vl-7b-hf --served-model-name Qwen2-VL-7B-Instruct" then fails with the error below.

Error log:

INFO 08-30 17:20:43 api_server.py:440] vLLM API server version 0.5.5
INFO 08-30 17:20:43 api_server.py:441] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, cpu_offload_gb=0, device='auto', disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='auto', enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host=None, ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=None, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='/home/qwen2-vl-7b/qwen-vl-7b-hf', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, pipeline_parallel_size=1, port=8000, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['Qwen2-VL-7B-Instruct'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
Traceback (most recent call last):
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/site-packages/vllm/entrypoints/openai/api_server.py", line 476, in <module>
    asyncio.run(run_server(args))
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/site-packages/vllm/entrypoints/openai/api_server.py", line 443, in run_server
    async with build_async_engine_client(args) as async_engine_client:
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/contextlib.py", line 171, in __aenter__
    return await self.gen.__anext__()
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/site-packages/vllm/entrypoints/openai/api_server.py", line 117, in build_async_engine_client
    if (model_is_embedding(args.model, args.trust_remote_code,
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/site-packages/vllm/entrypoints/openai/api_server.py", line 71, in model_is_embedding
    return ModelConfig(model=model_name,
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/site-packages/vllm/config.py", line 214, in __init__
    self.max_model_len = _get_and_verify_max_len(
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.8/site-packages/vllm/config.py", line 1650, in _get_and_verify_max_len
    assert "factor" in rope_scaling
AssertionError

fyabc commented 2 weeks ago

@Yao990x16 Hi, please install our forked vllm version from source; the official version installed from pip does not support Qwen2-VL yet (we are working on merging it into the official repository).

In addition, you can also use the docker image we provide; see here.
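For reference, a from-source install of that fork would typically look something like the commands below. This is a minimal sketch: the repository URL is the one mentioned later in this thread, and the branch name is assumed from the directory name the reporter uses, so check the fork for the actual branch.

git clone https://github.com/fyabc/vllm.git
cd vllm
git checkout add_qwen2_vl_new    # assumed branch name; verify against the fork
pip install -e .                 # builds the CUDA kernels (vllm._C), so a matching torch + CUDA toolchain is required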

PredyDaddy commented 2 weeks ago

Hi, the GPU I am using is a 4090.

Hi, I installed your team's vllm from source, using the vllm-add_qwen2_vl_new branch. The install commands were these two:

export VLLM_TARGET_DEVICE=empty
pip install -e . -i https://pypi.doubanio.com/simple

Then I ran the command:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model /home/Alvin/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct

but the error still occurs:

ValueError: The checkpoint you are trying to load has model type `qwen2_vl` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

Next I tried the docker image you provide, starting it with:

docker run --gpus all --ipc=host --network=host --rm --name qwen2 -it qwenllm/qwenvl:2-cu121 bash

Inside the container I run vllm with:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model /home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct

and it errors out:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct
INFO 08-31 08:12:39 api_server.py:440] vLLM API server version 0.5.5
INFO 08-31 08:12:39 api_server.py:441] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, cpu_offload_gb=0, device='auto', disable_async_output_proc=False, disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='auto', enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host=None, ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=None, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='Qwen/Qwen2-VL-7B-Instruct', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, pipeline_parallel_size=1, port=8000, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['Qwen2-VL-7B-Instruct'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
config.json: 100%|██████████████████████████████████████████████████████████████████| 1.20k/1.20k [00:00<00:00, 133kB/s]
preprocessor_config.json: 100%|████████████████████████████████████████████████████████| 347/347 [00:00<00:00, 38.2kB/s]
INFO 08-31 08:12:44 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/714ec436-3ca3-4e5a-ac83-91bf67223a54 for RPC Path.
INFO 08-31 08:12:44 api_server.py:161] Started engine process with PID 61
INFO 08-31 08:12:48 llm_engine.py:212] Initializing an LLM engine (v0.5.5) with config: model='Qwen/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-7B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
tokenizer_config.json: 100%|████████████████████████████████████████████████████████| 4.19k/4.19k [00:00<00:00, 388kB/s]
vocab.json: 100%|██████████████████████████████████████████████████████████████████| 2.78M/2.78M [00:01<00:00, 1.46MB/s]
merges.txt: 100%|██████████████████████████████████████████████████████████████████| 1.67M/1.67M [00:00<00:00, 3.58MB/s]
tokenizer.json: 100%|██████████████████████████████████████████████████████████████| 7.03M/7.03M [00:00<00:00, 8.52MB/s]
generation_config.json: 100%|███████████████████████████████████████████████████████████| 244/244 [00:00<00:00, 145kB/s]
INFO 08-31 08:12:57 model_runner.py:991] Starting to load model Qwen/Qwen2-VL-7B-Instruct...
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 750, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 641, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 850, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 302, in __init__
    self.model_executor = executor_class(
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/executor_base.py", line 47, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/gpu_executor.py", line 40, in _init_executor
    self.driver_worker.load_model()
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 182, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 993, in load_model
    self.model = get_model(model_config=self.model_config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
    return loader.load_model(model_config=model_config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/loader.py", line 341, in load_model
    model = _initialize_model(model_config, self.load_config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/loader.py", line 168, in _initialize_model
    model_class, _ = get_model_architecture(model_config)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/utils.py", line 31, in get_model_architecture
    return ModelRegistry.resolve_model_cls(architectures)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/__init__.py", line 167, in resolve_model_cls
    model_cls = ModelRegistry._try_load_model_cls(arch)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/__init__.py", line 161, in _try_load_model_cls
    return ModelRegistry._get_model(model_arch)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/__init__.py", line 141, in _get_model
    module = importlib.import_module(
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 848, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 573, in <module>
    ctx: InputContext, seq_len: int, mm_counts: Mapping[str, int]
TypeError: 'ABCMeta' object is not subscriptable
ERROR 08-31 08:12:59 api_server.py:171] RPCServer process died before responding to readiness probe

Could you please take a look for me?

fyabc commented 2 weeks ago

@PredyDaddy Hi, the error from your source install should be caused by the transformers version: you need to install, from source, a transformers build that includes that PR. As for the error inside the docker environment, it comes from a vllm bug that has already been fixed; please pull the updated image.
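A minimal sketch of what "install transformers from source" can look like; the exact PR/commit containing Qwen2-VL support is not spelled out in this thread, so installing from the main branch is an assumption here:

pip uninstall -y transformers
pip install git+https://github.com/huggingface/transformers.git    # yields a dev build such as 4.45.0.dev0; pin the commit with the Qwen2-VL PR if needed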

PredyDaddy commented 2 weeks ago

@PredyDaddy Hi, the error from your source install should be caused by the transformers version: you need to install, from source, a transformers build that includes that PR. As for the error inside the docker environment, it comes from a vllm bug that has already been fixed; please pull the updated image.

Hi, sorry to trouble you again, could you help me see where I went wrong? I am now using this version of transformers, 4.45.0.dev0, and the model can be loaded, but then the following problem appears:

AttributeError: '_OpNamespace' '_C' object has no attribute 'gelu_quick'

The detailed log is:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model /home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct
WARNING 09-02 10:43:41 _custom_ops.py:18] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/data/workspace/vllm-add_qwen2_vl_new/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 09-02 10:43:42 api_server.py:440] vLLM API server version 0.5.5
INFO 09-02 10:43:42 api_server.py:441] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, model='/home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['Qwen2-VL-7B-Instruct'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 09-02 10:43:42 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/f44ffc49-7d4f-420c-89bf-956b941d41cf for RPC Path.
INFO 09-02 10:43:42 api_server.py:161] Started engine process with PID 799764
WARNING 09-02 10:43:43 _custom_ops.py:18] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/data/workspace/vllm-add_qwen2_vl_new/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 09-02 10:43:44 llm_engine.py:212] Initializing an LLM engine (v0.5.5) with config: model='/home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='/home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-7B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-02 10:43:44 model_runner.py:991] Starting to load model /home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct...
Loading safetensors checkpoint shards:   0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  20% Completed | 1/5 [00:00<00:02,  1.70it/s]
Loading safetensors checkpoint shards:  40% Completed | 2/5 [00:02<00:04,  1.58s/it]
Loading safetensors checkpoint shards:  60% Completed | 3/5 [00:07<00:06,  3.00s/it]
Loading safetensors checkpoint shards:  80% Completed | 4/5 [00:07<00:01,  1.91s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:08<00:00,  1.44s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:08<00:00,  1.68s/it]

INFO 09-02 10:43:53 model_runner.py:1002] Loading model weights took 15.7193 GB
ERROR 09-02 10:43:54 _custom_ops.py:37] Error in calling custom op gelu_quick: '_OpNamespace' '_C' object has no attribute 'gelu_quick'
ERROR 09-02 10:43:54 _custom_ops.py:37] Possibly you have built or installed an obsolete version of vllm.
ERROR 09-02 10:43:54 _custom_ops.py:37] Please try a clean build and install of vllm,or remove old built files such as vllm/*cpython*.so and build/ .
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/engine/async_llm_engine.py", line 750, in from_engine_args
    engine = cls(
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/engine/async_llm_engine.py", line 641, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/engine/async_llm_engine.py", line 850, in _init_engine
    return engine_class(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/engine/llm_engine.py", line 316, in __init__
    self._initialize_kv_caches()
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/engine/llm_engine.py", line 451, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/executor/gpu_executor.py", line 114, in determine_num_available_blocks
    return self.driver_worker.determine_num_available_blocks()
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/worker/worker.py", line 222, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/worker/model_runner.py", line 1209, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/worker/model_runner.py", line 1536, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/model_executor/models/qwen2_vl.py", line 820, in forward
    image_embeds = self._process_image_input(image_input)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/model_executor/models/qwen2_vl.py", line 753, in _process_image_input
    image_embeds = self.visual(pixel_values,
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/model_executor/models/qwen2_vl.py", line 480, in forward
    x = blk(x, cu_seqlens=cu_seqlens, rotary_pos_emb=rotary_pos_emb)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/model_executor/models/qwen2_vl.py", line 276, in forward
    x = x + self.mlp(self.norm2(x))
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/model_executor/models/qwen2_vl.py", line 123, in forward
    x_parallel = self.act(x_parallel)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/model_executor/custom_op.py", line 14, in forward
    return self._forward_method(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/model_executor/layers/activation.py", line 155, in forward_cuda
    ops.gelu_quick(out, x)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/_custom_ops.py", line 38, in wrapper
    raise e
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/_custom_ops.py", line 29, in wrapper
    return fn(*args, **kwargs)
  File "/data/workspace/vllm-add_qwen2_vl_new/vllm/_custom_ops.py", line 65, in gelu_quick
    torch.ops._C.gelu_quick(out, x)
  File "/home/len1/miniconda3/envs/qwen2vllm/lib/python3.10/site-packages/torch/_ops.py", line 1170, in __getattr__
    raise AttributeError(
AttributeError: '_OpNamespace' '_C' object has no attribute 'gelu_quick'
ERROR 09-02 10:43:57 api_server.py:171] RPCServer process died before responding to readiness probe
PredyDaddy commented 2 weeks ago

@PredyDaddy Hi, the error from your source install should be caused by the transformers version: you need to install, from source, a transformers build that includes that PR. As for the error inside the docker environment, it comes from a vllm bug that has already been fixed; please pull the updated image.

I then pulled the latest image and ran the following command inside the container:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct

and then I hit an OOM. I am using a 4090, which I would expect to have enough VRAM. The full log is below:

root@len1-System-Product-Name:~# python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct
INFO 09-02 03:33:30 api_server.py:440] vLLM API server version 0.5.5
INFO 09-02 03:33:30 api_server.py:441] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, cpu_offload_gb=0, device='auto', disable_async_output_proc=False, disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='auto', enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host=None, ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=None, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='Qwen/Qwen2-VL-7B-Instruct', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, pipeline_parallel_size=1, port=8000, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['Qwen2-VL-7B-Instruct'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
INFO 09-02 03:33:33 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/35e18831-a9bf-45f2-9bb7-8415def9b36e for RPC Path.
INFO 09-02 03:33:33 api_server.py:161] Started engine process with PID 26614
INFO 09-02 03:33:37 llm_engine.py:212] Initializing an LLM engine (v0.5.5) with config: model='Qwen/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-7B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-02 03:33:39 model_runner.py:991] Starting to load model Qwen/Qwen2-VL-7B-Instruct...
INFO 09-02 03:33:41 weight_utils.py:236] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  20% Completed | 1/5 [00:12<00:49, 12.35s/it]
Loading safetensors checkpoint shards:  40% Completed | 2/5 [00:18<00:26,  8.70s/it]
Loading safetensors checkpoint shards:  60% Completed | 3/5 [00:23<00:14,  7.20s/it]
Loading safetensors checkpoint shards:  80% Completed | 4/5 [00:37<00:09,  9.64s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:50<00:00, 10.76s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:50<00:00, 10.01s/it]

INFO 09-02 03:34:32 model_runner.py:1002] Loading model weights took 15.7193 GB
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 750, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 641, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 850, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 316, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 451, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/gpu_executor.py", line 114, in determine_num_available_blocks
    return self.driver_worker.determine_num_available_blocks()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 222, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 1209, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 1536, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 839, in forward
    hidden_states = self.model(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 277, in forward
    hidden_states, residual = layer(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 220, in forward
    hidden_states = self.mlp(hidden_states)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 80, in forward
    x = self.act_fn(gate_up)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/custom_op.py", line 14, in forward
    return self._forward_method(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/layers/activation.py", line 36, in forward_cuda
    out = torch.empty(output_shape, dtype=x.dtype, device=x.device)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.16 GiB. GPU 0 has a total capacity of 23.63 GiB of which 1.21 GiB is free. Process 5185 has 60.47 MiB memory in use. Process 858848 has 21.33 GiB memory in use. Of the allocated memory 19.36 GiB is allocated by PyTorch, and 1.52 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
ERROR 09-02 03:34:43 api_server.py:171] RPCServer process died before responding to readiness probe
root@len1-System-Product-Name:~#  python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct --tensor_parallel_size 1 --pipeline_parallel_size 1
INFO 09-02 03:36:40 api_server.py:440] vLLM API server version 0.5.5
INFO 09-02 03:36:40 api_server.py:441] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, cpu_offload_gb=0, device='auto', disable_async_output_proc=False, disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='auto', enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host=None, ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=None, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='Qwen/Qwen2-VL-7B-Instruct', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, pipeline_parallel_size=1, port=8000, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['Qwen2-VL-7B-Instruct'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
INFO 09-02 03:36:43 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/df4cfda7-70d7-48a3-a79f-7ba006597044 for RPC Path.
INFO 09-02 03:36:43 api_server.py:161] Started engine process with PID 28959
INFO 09-02 03:36:47 llm_engine.py:212] Initializing an LLM engine (v0.5.5) with config: model='Qwen/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-7B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-02 03:36:48 model_runner.py:991] Starting to load model Qwen/Qwen2-VL-7B-Instruct...
INFO 09-02 03:36:49 weight_utils.py:236] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  20% Completed | 1/5 [00:05<00:20,  5.19s/it]
Loading safetensors checkpoint shards:  40% Completed | 2/5 [00:07<00:10,  3.48s/it]
Loading safetensors checkpoint shards:  60% Completed | 3/5 [00:07<00:04,  2.01s/it]
Loading safetensors checkpoint shards:  80% Completed | 4/5 [00:08<00:01,  1.46s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:09<00:00,  1.18s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:09<00:00,  1.81s/it]

INFO 09-02 03:36:59 model_runner.py:1002] Loading model weights took 15.7193 GB
INFO 09-02 03:37:07 gpu_executor.py:122] # GPU blocks: 0, # CPU blocks: 4681
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 750, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 641, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 850, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 316, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 464, in _initialize_kv_caches
    self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/gpu_executor.py", line 125, in initialize_cache
    self.driver_worker.initialize_cache(num_gpu_blocks, num_cpu_blocks)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 257, in initialize_cache
    raise_if_cache_size_invalid(num_gpu_blocks,
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 471, in raise_if_cache_size_invalid
    raise ValueError("No available memory for the cache blocks. "
ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
ERROR 09-02 03:37:08 api_server.py:171] RPCServer process died before responding to readiness probe
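Two things stand out in the logs above: the first OOM run shows another process (858848) already holding about 21 GiB on the GPU, and with the default max_seq_len=32768 plus 15.7 GiB of weights there is little room left for KV-cache blocks on a 24 GiB card. As a hedged sketch (not maintainer guidance from this thread), freeing the GPU and capping the context length is a common way around this:

python -m vllm.entrypoints.openai.api_server \
    --served-model-name Qwen2-VL-7B-Instruct \
    --model Qwen/Qwen2-VL-7B-Instruct \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.95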
Yao990x16 commented 1 week ago

@fyabc After building the branch https://github.com/fyabc/vllm from source, startup fails with:

Traceback (most recent call last):
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/qwen2-vl-7b/vllm/vllm/entrypoints/openai/api_server.py", line 42, in <module>
    from vllm.entrypoints.openai.rpc.server import run_rpc_server
  File "/home/qwen2-vl-7b/vllm/vllm/entrypoints/openai/rpc/server.py", line 14, in <module>
    from vllm import AsyncEngineArgs, AsyncLLMEngine
ImportError: cannot import name 'AsyncEngineArgs' from 'vllm' (unknown location)

Build environment: gcc 11.4.4, cmake 3.26.4, cuda 12.1.105, python 3.10.14

fyabc commented 1 week ago

Hi, sorry to trouble you again, could you help me see where I went wrong? I am now using this version of transformers, 4.45.0.dev0, and the model can be loaded, but then the following problem appears:

AttributeError: '_OpNamespace' '_C' object has no attribute 'gelu_quick'

This error is most likely caused by a failed vLLM build (Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")). Note that when building vLLM, the torch version must match what is specified in the requirements.
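
For reference, a clean rebuild along the lines suggested by the error output above might look like this (a rough sketch; the paths and the torch pin are taken from this thread, adjust to your own checkout):

# run inside the vllm source checkout (e.g. the add_qwen2_vl_new branch)
python -c "import torch; print(torch.__version__)"   # should match the version pinned in requirements (torch 2.4.0 here)
pip uninstall -y vllm
rm -rf build/ vllm/*cpython*.so                      # remove the stale build artifacts mentioned in the error message
pip install -e .
python -c "import vllm._C"                           # if this import succeeds, the C extension was built correctly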

fyabc commented 1 week ago

@PredyDaddy Hello, the error you see when installing from source should be caused by the transformers version; you need to install from source a transformers version that contains that PR. In addition, the error inside the docker environment comes from a vllm bug that has since been fixed, so please pull the updated image.

I then pulled the latest image and ran the following command inside it:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct

It then ran out of memory (OOM). I am using a 4090, which I thought should have enough VRAM. The full log is below:

root@len1-System-Product-Name:~# python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct
INFO 09-02 03:33:30 api_server.py:440] vLLM API server version 0.5.5
INFO 09-02 03:33:30 api_server.py:441] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, cpu_offload_gb=0, device='auto', disable_async_output_proc=False, disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='auto', enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host=None, ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=None, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='Qwen/Qwen2-VL-7B-Instruct', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, pipeline_parallel_size=1, port=8000, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['Qwen2-VL-7B-Instruct'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
INFO 09-02 03:33:33 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/35e18831-a9bf-45f2-9bb7-8415def9b36e for RPC Path.
INFO 09-02 03:33:33 api_server.py:161] Started engine process with PID 26614
INFO 09-02 03:33:37 llm_engine.py:212] Initializing an LLM engine (v0.5.5) with config: model='Qwen/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-7B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-02 03:33:39 model_runner.py:991] Starting to load model Qwen/Qwen2-VL-7B-Instruct...
INFO 09-02 03:33:41 weight_utils.py:236] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  20% Completed | 1/5 [00:12<00:49, 12.35s/it]
Loading safetensors checkpoint shards:  40% Completed | 2/5 [00:18<00:26,  8.70s/it]
Loading safetensors checkpoint shards:  60% Completed | 3/5 [00:23<00:14,  7.20s/it]
Loading safetensors checkpoint shards:  80% Completed | 4/5 [00:37<00:09,  9.64s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:50<00:00, 10.76s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:50<00:00, 10.01s/it]

INFO 09-02 03:34:32 model_runner.py:1002] Loading model weights took 15.7193 GB
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 750, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 641, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 850, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 316, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 451, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/gpu_executor.py", line 114, in determine_num_available_blocks
    return self.driver_worker.determine_num_available_blocks()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 222, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 1209, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 1536, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 839, in forward
    hidden_states = self.model(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 277, in forward
    hidden_states, residual = layer(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 220, in forward
    hidden_states = self.mlp(hidden_states)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 80, in forward
    x = self.act_fn(gate_up)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/custom_op.py", line 14, in forward
    return self._forward_method(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/layers/activation.py", line 36, in forward_cuda
    out = torch.empty(output_shape, dtype=x.dtype, device=x.device)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.16 GiB. GPU 0 has a total capacity of 23.63 GiB of which 1.21 GiB is free. Process 5185 has 60.47 MiB memory in use. Process 858848 has 21.33 GiB memory in use. Of the allocated memory 19.36 GiB is allocated by PyTorch, and 1.52 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
ERROR 09-02 03:34:43 api_server.py:171] RPCServer process died before responding to readiness probe
root@len1-System-Product-Name:~#  python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct --tensor_parallel_size 1 --pipeline_parallel_size 1
INFO 09-02 03:36:40 api_server.py:440] vLLM API server version 0.5.5
INFO 09-02 03:36:40 api_server.py:441] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, cpu_offload_gb=0, device='auto', disable_async_output_proc=False, disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='auto', enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host=None, ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=None, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='Qwen/Qwen2-VL-7B-Instruct', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, pipeline_parallel_size=1, port=8000, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['Qwen2-VL-7B-Instruct'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
INFO 09-02 03:36:43 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/df4cfda7-70d7-48a3-a79f-7ba006597044 for RPC Path.
INFO 09-02 03:36:43 api_server.py:161] Started engine process with PID 28959
INFO 09-02 03:36:47 llm_engine.py:212] Initializing an LLM engine (v0.5.5) with config: model='Qwen/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-7B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-02 03:36:48 model_runner.py:991] Starting to load model Qwen/Qwen2-VL-7B-Instruct...
INFO 09-02 03:36:49 weight_utils.py:236] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  20% Completed | 1/5 [00:05<00:20,  5.19s/it]
Loading safetensors checkpoint shards:  40% Completed | 2/5 [00:07<00:10,  3.48s/it]
Loading safetensors checkpoint shards:  60% Completed | 3/5 [00:07<00:04,  2.01s/it]
Loading safetensors checkpoint shards:  80% Completed | 4/5 [00:08<00:01,  1.46s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:09<00:00,  1.18s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:09<00:00,  1.81s/it]

INFO 09-02 03:36:59 model_runner.py:1002] Loading model weights took 15.7193 GB
INFO 09-02 03:37:07 gpu_executor.py:122] # GPU blocks: 0, # CPU blocks: 4681
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 750, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 641, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 850, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 316, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 464, in _initialize_kv_caches
    self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/gpu_executor.py", line 125, in initialize_cache
    self.driver_worker.initialize_cache(num_gpu_blocks, num_cpu_blocks)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 257, in initialize_cache
    raise_if_cache_size_invalid(num_gpu_blocks,
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 471, in raise_if_cache_size_invalid
    raise ValueError("No available memory for the cache blocks. "
ValueError: No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
ERROR 09-02 03:37:08 api_server.py:171] RPCServer process died before responding to readiness probe

At the moment, running Qwen2-VL under vllm with the default seq_len == 32768 requires a lot of GPU memory. If you add --max-model-len 16384 when starting the server, it should run fine on a 4090.
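
For example, the launch command used above then becomes:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model Qwen/Qwen2-VL-7B-Instruct --max-model-len 16384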

fyabc commented 1 week ago

@fyabc After building this branch https://github.com/fyabc/vllm, I get an error at startup: Traceback (most recent call last): File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.10/runpy.py", line 196, in _run_module_as_main return _run_code(code, main_globals, None, File "/root/anaconda3/envs/qwen2-vl-7b/lib/python3.10/runpy.py", line 86, in _run_code exec(code, run_globals) File "/home/qwen2-vl-7b/vllm/vllm/entrypoints/openai/api_server.py", line 42, in from vllm.entrypoints.openai.rpc.server import run_rpc_server File "/home/qwen2-vl-7b/vllm/vllm/entrypoints/openai/rpc/server.py", line 14, in from vllm import AsyncEngineArgs, AsyncLLMEngine ImportError: cannot import name 'AsyncEngineArgs' from 'vllm' (unknown location) Build environment: gcc 11.4.4, cmake 3.26.4, cuda 12.1.105, python 3.10.14

This error is probably similar to this one: first check whether the directory you launch from contains a directory named vllm or a file named vllm.py, rename them, and then try again.
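
A quick way to check, as a rough illustration (run it from the directory you start the server in; the new name is arbitrary):

ls -d vllm vllm.py 2>/dev/null   # a local vllm/ directory or vllm.py file here shadows the installed package
mv vllm vllm_local               # rename whichever exists, then launch the server again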

PredyDaddy commented 1 week ago

For people who don't know how to run Qwen2-VL inference with vLLM, here is my solution:

# start a container from the image
docker run --gpus all -it --shm-size=64g --privileged --name qwen2vllm --network="host" -v $(pwd):/app qwenllm/qwenvl:latest

# serve Qwen2-VL-7B-Instruct-GPTQ-Int4 with vLLM
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct-GPTQ-Int4 --model path/to/Qwen2-VL-7B-Instruct-GPTQ-Int4

# serve Qwen2-VL-7B-Instruct with vLLM
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model path/to/Qwen2-VL-7B-Instruct --max-model-len 16384

All of this was tested on an NVIDIA RTX 4090.

Many many thanks!
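
Once the server is up, a request can be sent through the OpenAI-compatible API; here is a minimal sketch (the port and the image URL are illustrative, not from this thread):

curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "Qwen2-VL-7B-Instruct",
        "messages": [{
          "role": "user",
          "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/demo.jpg"}},
            {"type": "text", "text": "Describe this image."}
          ]
        }]
      }'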

Yao990x16 commented 1 week ago

Hi, that problem is already solved; I adjusted the import path in entrypoints/openai/rpc/server.py, and then ran into the same problem as above. Part of the error output: WARNING 09-03 13:51:56 _custom_ops.py:18] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'") ERROR 09-03 13:53:24 _custom_ops.py:37] Error in calling custom op gelu_quick: '_OpNamespace' '_C' object has no attribute 'gelu_quick' ERROR 09-03 13:53:24 _custom_ops.py:37] Possibly you have built or installed an obsolete version of vllm. ERROR 09-03 13:53:24 _custom_ops.py:37] Please try a clean build and install of vllm, or remove old built files such as vllm/cpython.so and build/ . AttributeError: '_OpNamespace' '_C' object has no attribute 'gelu_quick' ERROR 09-03 13:53:27 api_server.py:171] RPCServer process died before responding to readiness probe. I saw your answer that this is a vllm build problem and that the torch version has to satisfy the requirements. The add_qwen2_vl_new branch requires cmake >= 3.26 and torch == 2.4.0, and I meet both. My build directory is /home/qwen2-vl-7b/add_qwen2_vl_new/vllm, and I built it there with 'pip install -e .'.

Yao990x16 commented 1 week ago

Hi, sorry to bother you again, but could you take a look at what I did wrong? I am now using that transformers version, 4.45.0.dev0, and the model loads, but then the following problem appears:

AttributeError: '_OpNamespace' '_C' object has no attribute 'gelu_quick'

The full log is:

python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model /home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct
WARNING 09-02 10:43:41 _custom_ops.py:18] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/data/workspace/vllm-add_qwen2_vl_new/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 09-02 10:43:42 api_server.py:440] vLLM API server version 0.5.5
INFO 09-02 10:43:42 api_server.py:441] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, model='/home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['Qwen2-VL-7B-Instruct'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 09-02 10:43:42 api_server.py:144] Multiprocessing frontend to use ipc:///tmp/f44ffc49-7d4f-420c-89bf-956b941d41cf for RPC Path.
INFO 09-02 10:43:42 api_server.py:161] Started engine process with PID 799764
WARNING 09-02 10:43:43 _custom_ops.py:18] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
/data/workspace/vllm-add_qwen2_vl_new/vllm/connections.py:8: RuntimeWarning: Failed to read commit hash:
No module named 'vllm.commit_id'
  from vllm.version import __version__ as VLLM_VERSION
INFO 09-02 10:43:44 llm_engine.py:212] Initializing an LLM engine (v0.5.5) with config: model='/home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='/home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-7B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-02 10:43:44 model_runner.py:991] Starting to load model /home/len1/.cache/modelscope/hub/qwen/Qwen2-VL-7B-Instruct...
Loading safetensors checkpoint shards:   0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  20% Completed | 1/5 [00:00<00:02,  1.70it/s]
Loading safetensors checkpoint shards:  40% Completed | 2/5 [00:02<00:04,  1.58s/it]
Loading safetensors checkpoint shards:  60% Completed | 3/5 [00:07<00:06,  3.00s/it]
Loading safetensors checkpoint shards:  80% Completed | 4/5 [00:07<00:01,  1.91s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:08<00:00,  1.44s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:08<00:00,  1.68s/it]

INFO 09-02 10:43:53 model_runner.py:1002] Loading model weights took 15.7193 GB
ERROR 09-02 10:43:54 _custom_ops.py:37] Error in calling custom op gelu_quick: '_OpNamespace' '_C' object has no attribute 'gelu_quick'
ERROR 09-02 10:43:54 _custom_ops.py:37] Possibly you have built or installed an obsolete version of vllm.
ERROR 09-02 10:43:54 _custom_ops.py:37] Please try a clean build and install of vllm,or remove old built files such as vllm/*cpython*.so and build/ .

As for the build problem: if 'pip install -e .' does not work, you can build manually with cmake and ninja, and then copy the compiled modules into Python's site-packages directory by hand.

fyabc commented 1 week ago

Hello, has your problem been resolved now after rebuilding and reinstalling vLLM?

pichiu commented 1 week ago

Hello, when deploying qwen2-vl-7b on a V100 with qwenllm/qwenvl:2-cu121, I get the following error: RuntimeError: FlashAttention only supports Ampere GPUs or newer.

The failing code is here: https://github.com/vllm-project/vllm/blob/2e87db7e708724110a84586dc916461ee9db09f7/vllm/model_executor/models/qwen2_vl.py#L226

Addendum: when FlashAttention cannot be used, vllm normally falls back to XFormers: Cannot use FlashAttention-2 backend for Volta and Turing GPUs. Using XFormers backend.

Thanks

Cherryjingyao commented 1 week ago
raise AttributeError(

AttributeError: '_OpNamespace' '_C' object has no attribute 'gelu_quick' ERROR 09-04 17:12:09 api_server.py:171] RPCServer process died before responding to readiness probe

I ran into the same problem.

fyabc commented 1 week ago

Hello, we are adding xformers support to the vllm implementation of Qwen2-VL; it will be available soon, please bear with us.

fyabc commented 1 week ago
raise AttributeError(

AttributeError: '_OpNamespace' '_C' object has no attribute 'gelu_quick' ERROR 09-04 17:12:09 api_server.py:171] RPCServer process died before responding to readiness probe

I ran into the same problem.

As replied above, could you first check the torch version and then rebuild vllm?

douyh commented 1 week ago

(screenshot) I ran into the following error when building vllm; what could be the problem?

Environment:

(screenshot)

torch version: '2.4.0+cu121'

fyabc commented 1 week ago

@pichiu Hello, we have updated the docker image with xformers support, so it can be used on GPUs that do not support flash-attn. Please pull the latest image.

gujiaqivadin commented 1 week ago

Hello, roughly when will the xformers version be ready?

fyabc commented 1 week ago

@gujiaqivadin Hello, as replied above, we have updated the docker image with xformers support, so it can be used on GPUs that do not support flash-attn. Please pull the latest image.
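
For example (the tags below are the ones mentioned earlier in this thread):

docker pull qwenllm/qwenvl:2-cu121   # or qwenllm/qwenvl:latest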

gujiaqivadin commented 1 week ago

CUDA_VISIBLE_DEVICES=0,1 python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2-VL-7B-Instruct --model weights/Qwen2-VL-7B-Instruct --dtype=half --max-model-len=16384 --tensor_parallel_size=2

Hello, I pulled the latest image and deployed the service via the vllm server with the command above, but got the following error: RuntimeError: CUDA error: no kernel image is available for execution on the device CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1

fyabc commented 6 days ago

(quoting gujiaqivadin's report above: launching on the updated image with tensor_parallel_size=2 fails with RuntimeError: CUDA error: no kernel image is available for execution on the device)

Hello, the qwenllm/qwenvl:2-cu121 image requires a host driver version >= 530.30.02; please check whether your driver meets this requirement.
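
A quick way to check the host driver version (run on the host, not inside the container):

nvidia-smi --query-gpu=driver_version --format=csv,noheader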

pichiu commented 6 days ago

(quoting the exchange above about the "no kernel image is available" error and the >= 530.30.02 driver requirement)

Hello, the driver version is 535.129.03 and the image digest is 602c8b034e71.

The command is as follows:
python3 -m vllm.entrypoints.openai.api_server --served-model-name qwen2-vl-7b --model /models/Qwen2-VL-7B-Instruct --dtype=half --max-model-len=16384 --tensor_parallel_size=1

On the V100 I get the same error: RuntimeError: CUDA error: no kernel image is available for execution on the device

fyabc commented 6 days ago

@pichiu Hello, could you add export CUDA_LAUNCH_BLOCKING=1 and then post the full error message?

pichiu commented 6 days ago

@pichiu Hello, could you add export CUDA_LAUNCH_BLOCKING=1 and then post the full error message?

root@test-vllm-qwen2-vl-0:/data/shared/Qwen# echo $CUDA_LAUNCH_BLOCKING
1
root@test-vllm-qwen2-vl-0:/data/shared/Qwen# python3 -m vllm.entrypoints.openai.api_server --served-model-name qwen2-vl-7b --model /models/Qwen2-VL-7B-Instruct --dtype=half --max-model-len=16384 --tensor_parallel_size=1
INFO 09-09 09:27:51 api_server.py:495] vLLM API server version 0.6.0
INFO 09-09 09:27:51 api_server.py:496] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, cpu_offload_gb=0, device='auto', disable_async_output_proc=False, disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='half', enable_auto_tool_choice=False, enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host=None, ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=16384, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='/models/Qwen2-VL-7B-Instruct', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, override_neuron_config=None, pipeline_parallel_size=1, port=8000, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['qwen2-vl-7b'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, tool_call_parser=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
INFO 09-09 09:27:51 api_server.py:162] Multiprocessing frontend to use ipc:///tmp/70c49d47-b9e0-4921-a6f3-ea37522d9cfa for RPC Path.
INFO 09-09 09:27:51 api_server.py:178] Started engine process with PID 3121
WARNING 09-09 09:27:54 config.py:1653] Casting torch.bfloat16 to torch.float16.
INFO 09-09 09:27:54 llm_engine.py:213] Initializing an LLM engine (v0.6.0) with config: model='/models/Qwen2-VL-7B-Instruct', speculative_config=None, tokenizer='/models/Qwen2-VL-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=16384, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=qwen2-vl-7b, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-09 09:27:55 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 09-09 09:27:55 selector.py:116] Using XFormers backend.
/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_fwd")
/usr/local/lib/python3.8/dist-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_bwd")
INFO 09-09 09:27:55 model_runner.py:993] Starting to load model /models/Qwen2-VL-7B-Instruct...
INFO 09-09 09:27:55 selector.py:217] Cannot use FlashAttention-2 backend for Volta and Turing GPUs.
INFO 09-09 09:27:55 selector.py:116] Using XFormers backend.
Loading safetensors checkpoint shards:   0% Completed | 0/5 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  20% Completed | 1/5 [00:01<00:06,  1.64s/it]
Loading safetensors checkpoint shards:  40% Completed | 2/5 [00:02<00:04,  1.42s/it]
Loading safetensors checkpoint shards:  60% Completed | 3/5 [00:04<00:02,  1.36s/it]
Loading safetensors checkpoint shards:  80% Completed | 4/5 [00:04<00:00,  1.02it/s]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:05<00:00,  1.06s/it]
Loading safetensors checkpoint shards: 100% Completed | 5/5 [00:05<00:00,  1.16s/it]

INFO 09-09 09:28:02 model_runner.py:1004] Loading model weights took 15.7193 GB
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 735, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 615, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 835, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 319, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 448, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/gpu_executor.py", line 114, in determine_num_available_blocks
    return self.driver_worker.determine_num_available_blocks()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 222, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 1211, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 1538, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 863, in forward
    image_embeds = self._process_image_input(image_input)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 796, in _process_image_input
    image_embeds = self.visual(pixel_values,
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 532, in forward
    x = blk(x, cu_seqlens=cu_seqlens, rotary_pos_emb=rotary_pos_emb)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 328, in forward
    x = x + self.mlp(self.norm2(x))
RuntimeError: CUDA error: no kernel image is available for execution on the device
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

ERROR 09-09 09:28:06 api_server.py:188] RPCServer process died before responding to readiness probe
fyabc commented 4 days ago

(quoting douyh's vllm compilation error report above, environment with torch '2.4.0+cu121')

@douyh Hello, could you recompile with pip install . -vvv and then provide the full build output?

fyabc commented 4 days ago

@gujiaqivadin @pichiu Hello, please refer to the link here and try using the image we provide, qwenllm/qwenvl:2-cu121-wo-flashattn, on GPUs that do not support flash-attn.
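
A minimal launch sketch with that image, assuming the weights live at /models/Qwen2-VL-7B-Instruct on the host (the mount path, port, and serving flags simply mirror the commands earlier in this thread and are illustrative):

docker run --gpus all --rm -it \
  -p 8000:8000 \
  -v /models/Qwen2-VL-7B-Instruct:/models/Qwen2-VL-7B-Instruct \
  qwenllm/qwenvl:2-cu121-wo-flashattn \
  python3 -m vllm.entrypoints.openai.api_server \
    --served-model-name qwen2-vl-7b \
    --model /models/Qwen2-VL-7B-Instruct \
    --dtype=half --max-model-len=16384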