QwenLM / Qwen2-VL

Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
Apache License 2.0

The 72B model won't run #257

Open V-yw opened 1 week ago

V-yw commented 1 week ago

```
root@container-715b4abffa-ae32ab74:/data/shared/Qwen/Qwen# export CUDA_VISIBLE_DEVICES=0,1,2
root@container-715b4abffa-ae32ab74:/data/shared/Qwen/Qwen# cd /data/shared/Qwen/ && python -m vllm.entrypoints.openai.api_server --host 0.0.0.0 --port 6006 --served-model-name Qwen2-VL-72B-Instruct --model Qwen/Qwen2-VL-72B-Instruct
INFO 09-24 01:45:09 api_server.py:495] vLLM API server version 0.6.0
INFO 09-24 01:45:09 api_server.py:496] args: Namespace(allow_credentials=False, allowed_headers=['*'], allowed_methods=['*'], allowed_origins=['*'], api_key=None, block_size=16, chat_template=None, code_revision=None, collect_detailed_traces=None, config_format='auto', cpu_offload_gb=0, device='auto', disable_async_output_proc=False, disable_custom_all_reduce=False, disable_frontend_multiprocessing=False, disable_log_requests=False, disable_log_stats=False, disable_logprobs_during_spec_decoding=None, disable_sliding_window=False, distributed_executor_backend=None, download_dir=None, dtype='auto', enable_auto_tool_choice=False, enable_chunked_prefill=None, enable_lora=False, enable_prefix_caching=False, enable_prompt_adapter=False, enforce_eager=False, engine_use_ray=False, fully_sharded_loras=False, gpu_memory_utilization=0.9, guided_decoding_backend='outlines', host='0.0.0.0', ignore_patterns=[], kv_cache_dtype='auto', limit_mm_per_prompt=None, load_format='auto', long_lora_scaling_factors=None, lora_dtype='auto', lora_extra_vocab_size=256, lora_modules=None, max_context_len_to_capture=None, max_cpu_loras=None, max_log_len=None, max_logprobs=20, max_lora_rank=16, max_loras=1, max_model_len=None, max_num_batched_tokens=None, max_num_seqs=256, max_parallel_loading_workers=None, max_prompt_adapter_token=0, max_prompt_adapters=1, max_seq_len_to_capture=8192, middleware=[], model='Qwen/Qwen2-VL-72B-Instruct', model_loader_extra_config=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, num_gpu_blocks_override=None, num_lookahead_slots=0, num_scheduler_steps=1, num_speculative_tokens=None, otlp_traces_endpoint=None, override_neuron_config=None, pipeline_parallel_size=1, port=6006, preemption_mode=None, prompt_adapters=None, qlora_adapter_name_or_path=None, quantization=None, quantization_param_path=None, ray_workers_use_nsight=False, response_role='assistant', return_tokens_as_token_ids=False, revision=None, root_path=None, rope_scaling=None, rope_theta=None, scheduler_delay_factor=0.0, seed=0, served_model_name=['Qwen2-VL-72B-Instruct'], skip_tokenizer_init=False, spec_decoding_acceptance_method='rejection_sampler', speculative_disable_by_batch_size=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_model=None, speculative_model_quantization=None, ssl_ca_certs=None, ssl_cert_reqs=0, ssl_certfile=None, ssl_keyfile=None, swap_space=4, tensor_parallel_size=1, tokenizer=None, tokenizer_mode='auto', tokenizer_pool_extra_config=None, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_revision=None, tool_call_parser=None, trust_remote_code=False, typical_acceptance_sampler_posterior_alpha=None, typical_acceptance_sampler_posterior_threshold=None, use_v2_block_manager=False, uvicorn_log_level='info', worker_use_ray=False)
INFO 09-24 01:45:09 api_server.py:162] Multiprocessing frontend to use ipc:///tmp/f3ba2ea6-82a3-4935-b2e0-390d47be9de9 for RPC Path.
INFO 09-24 01:45:09 api_server.py:178] Started engine process with PID 46402
INFO 09-24 01:45:14 llm_engine.py:232] Initializing an LLM engine (v0.6.0) with config: model='Qwen/Qwen2-VL-72B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-VL-72B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2-VL-72B-Instruct, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)
INFO 09-24 01:45:15 model_runner.py:993] Starting to load model Qwen/Qwen2-VL-72B-Instruct...
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.8/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 735, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 615, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 835, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/vllm/engine/llm_engine.py", line 324, in __init__
    self.model_executor = executor_class(
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/executor_base.py", line 47, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.8/dist-packages/vllm/executor/gpu_executor.py", line 40, in _init_executor
    self.driver_worker.load_model()
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/worker.py", line 182, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.8/dist-packages/vllm/worker/model_runner.py", line 995, in load_model
    self.model = get_model(model_config=self.model_config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
    return loader.load_model(model_config=model_config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/loader.py", line 357, in load_model
    model = _initialize_model(model_config, self.load_config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/loader.py", line 171, in _initialize_model
    return build_model(
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/model_loader/loader.py", line 156, in build_model
    return model_class(config=hf_config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2_vl.py", line 726, in __init__
    self.model = Qwen2Model(config, cache_config, quant_config)
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 243, in __init__
    self.start_layer, self.end_layer, self.layers = make_layers(
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/utils.py", line 248, in make_layers
    [PPMissingLayer() for _ in range(start_layer)] + [
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/utils.py", line 249, in <listcomp>
    maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 245, in <lambda>
    lambda prefix: Qwen2DecoderLayer(config=config,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 184, in __init__
    self.mlp = Qwen2MLP(
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/models/qwen2.py", line 69, in __init__
    self.down_proj = RowParallelLinear(intermediate_size,
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/layers/linear.py", line 974, in __init__
    self.quant_method.create_weights(
  File "/usr/local/lib/python3.8/dist-packages/vllm/model_executor/layers/linear.py", line 121, in create_weights
    weight = Parameter(torch.empty(sum(output_partition_sizes),
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/_device.py", line 79, in __torch_function__
    return func(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 462.00 MiB. GPU 0 has a total capacity of 23.65 GiB of which 207.81 MiB is free. Process 1769922 has 23.44 GiB memory in use. Of the allocated memory 22.88 GiB is allocated by PyTorch, and 123.50 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
ERROR 09-24 01:45:19 api_server.py:188] RPCServer process died before responding to readiness probe
```
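For reference, the args dump above shows tensor_parallel_size=1, so vLLM is trying to place the entire model on GPU 0, which another process already fills (Process 1769922 holds 23.44 GiB of the 23.65 GiB card). In bf16 the 72B weights alone are roughly 144 GB, versus roughly 15 GB for 7B and 4 GB for 2B, which is why the smaller models load and this one cannot. Below is a minimal sketch of a multi-GPU launch; --tensor-parallel-size is a real vLLM flag, but the device list and TP value are illustrative assumptions, and even sharded this way three 24 GiB cards cannot hold the unquantized 72B checkpoint.

```bash
# Sketch only: shard the weights across several GPUs instead of loading them all on GPU 0.
# The TP value must equal the number of GPUs used and divide the model's attention-head count.
export CUDA_VISIBLE_DEVICES=0,1   # assumption: two otherwise-free cards
python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 --port 6006 \
    --served-model-name Qwen2-VL-72B-Instruct \
    --model Qwen/Qwen2-VL-72B-Instruct \
    --tensor-parallel-size 2
# Even with tensor parallelism, ~144 GB of bf16 weights do not fit in 3 x 24 GiB,
# so this machine would still need larger GPUs or a quantized checkpoint.
```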


Why does it tell me there is not enough GPU memory? The 7B and 2B models both run fine.

hiker-lw commented 1 week ago

Switch to a GPU with more memory, haha.

RANYABING commented 1 week ago

It won't run on my 4x A800 machine either.

```
rank0: File "/home/ryb/anaconda3/envs/qwen/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 304, in __init__
rank0: File "/home/ryb/anaconda3/envs/qwen/lib/python3.10/site-packages/vllm/model_executor/layers/linear.py", line 122, in create_weights
rank0:     weight = Parameter(torch.empty(sum(output_partition_sizes),
rank0: File "/home/ryb/anaconda3/envs/qwen/lib/python3.10/site-packages/torch/utils/_device.py", line 79, in __torch_function__
rank0:     return func(*args, **kwargs)
rank0: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 924.00 MiB. GPU 0 has a total capacity of 79.15 GiB of which 672.56 MiB is free. Process 1518059 has 14.64 GiB memory in use. Process 1683663 has 5.12 GiB memory in use. Including non-PyTorch memory, this process has 58.72 GiB memory in use. Of the allocated memory 58.12 GiB is allocated by PyTorch, and 124.39 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```
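This trace likewise shows GPU 0 already shared with other processes (14.64 GiB + 5.12 GiB), so even 79 GiB A800s run out of room while the weights are materialized. A hedged sketch for a 4x A800 node, assuming the cards are otherwise free and using only flags that already appear in the args dump earlier in the thread (--tensor-parallel-size, --gpu-memory-utilization, --max-model-len):

```bash
# Check what is already resident on each GPU; the OOM above shows two unrelated
# processes holding roughly 20 GiB on GPU 0.
nvidia-smi

# Sketch only: spread the 72B weights over all four cards (~36 GiB of bf16 weights each),
# leave headroom for other tenants, and cap the context length so the KV cache fits too.
python -m vllm.entrypoints.openai.api_server \
    --host 0.0.0.0 --port 6006 \
    --served-model-name Qwen2-VL-72B-Instruct \
    --model Qwen/Qwen2-VL-72B-Instruct \
    --tensor-parallel-size 4 \
    --gpu-memory-utilization 0.85 \
    --max-model-len 8192
```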