vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Error Running DeepSeek-v2-Lite w/ FP8 #6875

Open Jiayi-Pan opened 1 month ago

Jiayi-Pan commented 1 month ago

Your current environment

Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 24.04 LTS (x86_64)
GCC version: (Ubuntu 13.2.0-23ubuntu4) 13.2.0
Clang version: Could not collect
CMake version: version 3.29.6
Libc version: glibc-2.39

Python version: 3.10.14 (main, May  6 2024, 19:42:50) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-31-generic-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
GPU 2: NVIDIA RTX A6000
GPU 3: NVIDIA RTX A6000
GPU 4: NVIDIA RTX A6000
GPU 5: NVIDIA RTX A6000
GPU 6: NVIDIA RTX A6000
GPU 7: NVIDIA RTX A6000

Nvidia driver version: 555.42.02
cuDNN version: Probably one of the following:
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_adv.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_cnn.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_engines_precompiled.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_engines_runtime_compiled.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_graph.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_heuristic.so.9
/usr/local/cuda-12.5/targets/x86_64-linux/lib/libcudnn_ops.so.9
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        46 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               128
On-line CPU(s) list:                  0-127
Vendor ID:                            GenuineIntel
Model name:                           Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz
CPU family:                           6
Model:                                106
Thread(s) per core:                   2
Core(s) per socket:                   32
Socket(s):                            2
Stepping:                             6
CPU(s) scaling MHz:                   25%
CPU max MHz:                          3200.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4000.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local split_lock_detect wbnoinvd dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid fsrm md_clear pconfig flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            3 MiB (64 instances)
L1i cache:                            2 MiB (64 instances)
L2 cache:                             80 MiB (64 instances)
L3 cache:                             96 MiB (2 instances)
NUMA node(s):                         4
NUMA node0 CPU(s):                    0-15,64-79
NUMA node1 CPU(s):                    16-31,80-95
NUMA node2 CPU(s):                    32-47,96-111
NUMA node3 CPU(s):                    48-63,112-127
Vulnerability Gather data sampling:   Vulnerable: No microcode
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Vulnerable
Vulnerability Spectre v1:             Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2:             Vulnerable; IBPB: disabled; STIBP: disabled; PBRSB-eIBRS: Vulnerable; BHI: Vulnerable (Syscall hardening enabled)
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] transformers==4.43.3
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==2.3.1
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.1                    pypi_0    pypi
[conda] torchvision               0.18.1                   pypi_0    pypi
[conda] transformers              4.43.3                   pypi_0    pypi
[conda] transformers-stream-generator 0.0.5                    pypi_0    pypi
[conda] triton                    2.3.1                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X  NV4 PXB PXB SYS SYS SYS SYS NODE    0-15,64-79  0       N/A
GPU1    NV4  X  PXB PXB SYS SYS SYS SYS NODE    0-15,64-79  0       N/A
GPU2    PXB PXB  X  NV4 SYS SYS SYS SYS NODE    0-15,64-79  0       N/A
GPU3    PXB PXB NV4  X  SYS SYS SYS SYS NODE    0-15,64-79  0       N/A
GPU4    SYS SYS SYS SYS  X  NV4 PXB PXB SYS 16-31,80-95 1       N/A
GPU5    SYS SYS SYS SYS NV4  X  PXB PXB SYS 16-31,80-95 1       N/A
GPU6    SYS SYS SYS SYS PXB PXB  X  NV4 SYS 16-31,80-95 1       N/A
GPU7    SYS SYS SYS SYS PXB PXB NV4  X  SYS 16-31,80-95 1       N/A
NIC0    NODE    NODE    NODE    NODE    SYS SYS SYS SYS  X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

🐛 Describe the bug

vllm serve neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8 --quantization fp8 --trust-remote-code
INFO 07-28 15:31:33 api_server.py:219] vLLM API server version 0.5.3.post1
INFO 07-28 15:31:33 api_server.py:220] args: Namespace(model_tag='neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8', host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=True, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization='fp8', rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None, dispatch_function=<function serve at 0x71d350966b00>)
WARNING 07-28 15:31:34 arg_utils.py:762] Chunked prefill is enabled by default for models with max_model_len > 32K. Currently, chunked prefill might not work with some features or models. If you encounter any issues, please disable chunked prefill by setting --enable-chunked-prefill=False.
INFO 07-28 15:31:34 config.py:806] Chunked prefill is enabled with max_num_batched_tokens=512.
INFO 07-28 15:31:34 llm_engine.py:176] Initializing an LLM engine (v0.5.3.post1) with config: model='neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8', speculative_config=None, tokenizer='neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=163840, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=fp8, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8, use_v2_block_manager=False, enable_prefix_caching=False)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 07-28 15:31:35 model_runner.py:680] Starting to load model neuralmagic/DeepSeek-Coder-V2-Lite-Instruct-FP8...
WARNING 07-28 15:31:35 fp8.py:39] Detected fp8 checkpoint. Please note that the format is experimental and subject to change.
Cache shape torch.Size([163840, 64])
INFO 07-28 15:31:35 weight_utils.py:223] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/bin/vllm", line 8, in <module>
[rank0]:     sys.exit(main())
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/scripts.py", line 148, in main
[rank0]:     args.dispatch_function(args)
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/scripts.py", line 28, in serve
[rank0]:     run_server(args)
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/entrypoints/openai/api_server.py", line 231, in run_server
[rank0]:     if llm_engine is not None else AsyncLLMEngine.from_engine_args(
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 466, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 380, in __init__
[rank0]:     self.engine = self._init_engine(*args, **kwargs)
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/engine/async_llm_engine.py", line 547, in _init_engine
[rank0]:     return engine_class(*args, **kwargs)
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 251, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 47, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 36, in _init_executor
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/worker/worker.py", line 139, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 682, in load_model
[rank0]:     self.model = get_model(model_config=self.model_config,
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 21, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 283, in load_model
[rank0]:     model.load_weights(
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/model_executor/models/deepseek_v2.py", line 533, in load_weights
[rank0]:     weight_loader(param, loaded_weight)
[rank0]:   File "/home/jiayipan/miniconda3/envs/GPML/lib/python3.10/site-packages/vllm/model_executor/model_loader/weight_utils.py", line 468, in default_weight_loader
[rank0]:     assert param.size() == loaded_weight.size(), f"{param.size()}, {loaded_weight.size()}"
[rank0]: AssertionError: torch.Size([1]), torch.Size([])
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
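For readers skimming the traceback: the assertion compares a model parameter of shape [1] against a checkpoint tensor of shape [], i.e. a 0-d scalar, which is how per-tensor FP8 scales are often serialized. Below is a minimal, illustrative sketch of the mismatch and one way a loader could tolerate it; this is not vLLM's actual fix, and load_scale is a made-up helper name.

import torch

def load_scale(param: torch.nn.Parameter, loaded_weight: torch.Tensor) -> None:
    # Reshape a 0-d checkpoint scalar to match a shape-[1] parameter.
    if loaded_weight.dim() == 0 and param.numel() == 1:
        loaded_weight = loaded_weight.reshape(1)
    assert param.size() == loaded_weight.size(), f"{param.size()}, {loaded_weight.size()}"
    param.data.copy_(loaded_weight)

p = torch.nn.Parameter(torch.empty(1))  # model-side scale, torch.Size([1])
w = torch.tensor(0.5)                   # checkpoint scale, torch.Size([])
load_scale(p, w)                        # succeeds after the reshape; the default loader asserts instead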
robertgshaw2-neuralmagic commented 1 month ago

@mgoin could you take a look at this?

robertgshaw2-neuralmagic commented 1 month ago

I think this is an issue with ReplicatedLinear

mgoin commented 1 month ago

Sure, but regardless of this current issue, I believe these are Ampere GPUs, which the FP8 Triton MoE kernel doesn't support.
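A quick way to confirm the hardware constraint (plain PyTorch, nothing vLLM-specific; the script below is only an illustrative check):

import torch

# Print each GPU's compute capability. FP8 tensor-core support starts at
# sm_89 (Ada) / sm_90 (Hopper); RTX A6000s are Ampere (sm_86).
for i in range(torch.cuda.device_count()):
    major, minor = torch.cuda.get_device_capability(i)
    print(f"GPU {i}: {torch.cuda.get_device_name(i)} -> sm_{major}{minor}")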

djw-star commented 1 month ago

@mgoin Sorry to bother you. I got the same error when I ran DeepSeek-Coder-V2-Lite-Base-FP8 on two 4090s. My execution command is:

vllm serve DeepSeek-Coder-V2-Lite-Base-FP8 --gpu-memory-utilization 0.9 --trust-remote-code --max-model-len 10000 --enable-chunked-prefill=False --tensor-parallel-size 2 --enforce_eager

Is it failing for the same reason?

robertgshaw2-neuralmagic commented 1 month ago

Yes, FP8 for MoE needs compute capability 9.0, and I believe the 4090 is 8.9.

We need to wait for PyTorch to upgrade to Triton 3.0 to support 8.9.
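A hedged sketch of the rule of thumb stated above (the helper name is mine, not a vLLM API, and this is not an official support matrix): compute capability 9.0 works with the current kernels, 8.9 (e.g. RTX 4090) waits on Triton 3.0, and anything older is unsupported.

import torch
import triton

def fp8_moe_status(device: int = 0) -> str:
    # Rough gate based on the comments above.
    cc = torch.cuda.get_device_capability(device)
    triton_major = int(triton.__version__.split(".")[0])
    if cc >= (9, 0):
        return "supported"
    if cc == (8, 9):
        return "supported" if triton_major >= 3 else "blocked until Triton >= 3.0"
    return "unsupported (pre-Ada GPU)"

print(fp8_moe_status())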

cdj0311 commented 1 month ago

The same problem with an L20 GPU.

freegheist commented 1 week ago

Any news on this, considering the new DeepSeek V2.5 release?

mgoin commented 1 week ago

This should work with the latest release. Have you tried vLLM 0.6.0 and seen the same issue?

freegheist commented 6 days ago

This should work with the latest release. Have you tried vLLM 0.6.0 and seen the same issue?

The 0.6.0 Docker container gives me the following on 8x A6000 (Ampere) with DeepSeek-Coder-V2-Instruct:

docker run --name vllm_container --gpus=all -e VLLM_ENGINE_ITERATION_TIMEOUT_S=1200 -p 7861:8000 --ipc=host --shm-size=32gb -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 -v /srv/syn/models/deploy/instruct/deepseek-ai_DeepSeek-Coder-V2-Instruct:/srv/syn/models/deploy/instruct/deepseek-ai_DeepSeek-Coder-V2-Instruct vllm/vllm-openai:v0.6.0 --host 0.0.0.0 --served-model-name tgi --tensor-parallel-size 8 --max-num-seqs 16 --model /srv/syn/models/deploy/instruct/deepseek-ai_DeepSeek-Coder-V2-Instruct --max-model-len 8192 --max-num-batched-tokens 8192 --trust-remote-code --enforce-eager --quantization fp8

Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 735, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 615, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 835, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 305, in __init__
    self.model_executor = executor_class(
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 222, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 26, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/executor_base.py", line 47, in __init__
    self._init_executor()
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 125, in _init_executor
    self._run_workers("load_model",
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 199, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 182, in load_model
    self.model_runner.load_model()
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 917, in load_model
    self.model = get_model(model_config=self.model_config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
    return loader.load_model(model_config=model_config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 341, in load_model
    model = _initialize_model(model_config, self.load_config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 170, in _initialize_model
    return build_model(
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/model_loader/loader.py", line 155, in build_model
    return model_class(config=hf_config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 486, in __init__
    self.model = DeepseekV2Model(config,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 428, in __init__
    self.start_layer, self.end_layer, self.layers = make_layers(
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 247, in make_layers
    [PPMissingLayer() for _ in range(start_layer)] + [
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/utils.py", line 248, in <listcomp>
    maybe_offload_to_cpu(layer_fn(prefix=f"{prefix}.{idx}"))
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 430, in <lambda>
    lambda prefix: DeepseekV2DecoderLayer(
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 358, in __init__
    self.mlp = DeepseekV2MoE(
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 113, in __init__
    self.experts = FusedMoE(num_experts=config.n_routed_experts,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 215, in __init__
    self.quant_method.create_weights(
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/quantization/fp8.py", line 307, in create_weights
    w13_weight = torch.nn.Parameter(torch.empty(num_experts,
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_device.py", line 79, in __torch_function__
    return func(*args, **kwargs)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 600.00 MiB. GPU 0 has a total capacity of 47.44 GiB of which 237.38 MiB is free. Process 2388939 has 47.20 GiB memory in use. Of the allocated memory 46.57 GiB is allocated by PyTorch, and 180.97 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
ERROR 09-09 01:35:22 api_server.py:186] RPCServer process died before responding to readiness probe

mgoin commented 6 days ago

@freegheist You are loading an fp16 checkpoint and dynamically quantizing it to fp8 after loading. This is running out of memory because you don't have enough memory to hold the whole fp16 checkpoint before the quantization. You need to use an already quantized FP8 checkpoint in order to fit into your system - you should be able to try https://huggingface.co/neuralmagic/DeepSeek-Coder-V2-Instruct-FP8
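A back-of-envelope check of the memory argument above (approximate figures; DeepSeek-Coder-V2-Instruct has roughly 236B total parameters and each RTX A6000 has 48 GB, ignoring activations and KV cache):

total_params = 236e9              # DeepSeek-Coder-V2-Instruct, total (not active) parameters
gpu_gb = 8 * 48                   # 8x RTX A6000 -> 384 GB aggregate
fp16_gb = total_params * 2 / 1e9  # ~472 GB of weights alone -> cannot fit, OOMs during load
fp8_gb = total_params * 1 / 1e9   # ~236 GB of weights -> fits, with headroom for the KV cache
print(f"fp16 ~ {fp16_gb:.0f} GB, fp8 ~ {fp8_gb:.0f} GB, available ~ {gpu_gb} GB")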

freegheist commented 5 days ago

@freegheist You are loading an fp16 checkpoint and dynamically quantizing it to fp8 after loading. This is running out of memory because you don't have enough memory to hold the whole fp16 checkpoint before the quantization. You need to use an already quantized FP8 checkpoint in order to fit into your system - you should be able to try https://huggingface.co/neuralmagic/DeepSeek-Coder-V2-Instruct-FP8

Thanks for that info... the error happens quickly and it didn't seem to OOM on RAM or swap, but that makes sense!

I'm trying the FP8 checkpoint now, which gives the error below:

docker run --name vllm_container --gpus=all -e VLLM_ENGINE_ITERATION_TIMEOUT_S=1200 -p 7861:8000 --ipc=host --shm-size=8gb -e CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 -v /srv/syn/models/deploy/instruct/neuralmagic_DeepSeek-Coder-V2-Instruct-FP8:/srv/syn/models/deploy/instruct/neuralmagic_DeepSeek-Coder-V2-Instruct-FP8 vllm/vllm-openai:v0.6.0 --host 0.0.0.0 --served-model-name tgi --tensor-parallel-size 8 --max-num-seqs 16 --gpu-memory-utilization 0.9999 --model /srv/syn/models/deploy/instruct/neuralmagic_DeepSeek-Coder-V2-Instruct-FP8 --max-model-len 8192 --max-num-batched-tokens 8192 --trust-remote-code --enforce-eager

(VllmWorkerProcess pid=121) INFO 09-10 00:28:35 model_runner.py:926] Loading model weights took 28.2876 GB
ERROR 09-10 00:28:38 multiproc_worker_utils.py:120] Worker VllmWorkerProcess pid 121 died, exit code: -15
INFO 09-10 00:28:38 multiproc_worker_utils.py:123] Killing local vLLM worker processes
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 236, in run_rpc_server
    server = AsyncEngineRPCServer(async_engine_args, usage_context, rpc_path)
  File "/usr/local/lib/python3.10/dist-packages/vllm/entrypoints/openai/rpc/server.py", line 34, in __init__
    self.engine = AsyncLLMEngine.from_engine_args(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 735, in from_engine_args
    engine = cls(
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 615, in __init__
    self.engine = self._init_engine(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 835, in _init_engine
    return engine_class(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/async_llm_engine.py", line 262, in __init__
    super().__init__(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 319, in __init__
    self._initialize_kv_caches()
  File "/usr/local/lib/python3.10/dist-packages/vllm/engine/llm_engine.py", line 448, in _initialize_kv_caches
    self.model_executor.determine_num_available_blocks())
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/distributed_gpu_executor.py", line 39, in determine_num_available_blocks
    num_blocks = self._run_workers("determine_num_available_blocks", )
  File "/usr/local/lib/python3.10/dist-packages/vllm/executor/multiproc_gpu_executor.py", line 199, in _run_workers
    driver_worker_output = driver_worker_method(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/worker.py", line 222, in determine_num_available_blocks
    self.model_runner.profile_run()
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1133, in profile_run
    self.execute_model(model_input, kv_caches, intermediate_tensors)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/worker/model_runner.py", line 1450, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 504, in forward
    hidden_states = self.model(input_ids, positions, kv_caches,
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 461, in forward
    hidden_states, residual = layer(positions, hidden_states,
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 401, in forward
    hidden_states = self.mlp(hidden_states)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/deepseek_v2.py", line 148, in forward
    final_hidden_states = self.experts(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/layer.py", line 442, in forward
    final_hidden_states = self.quant_method.apply(
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/quantization/fp8.py", line 496, in apply
    return fused_experts(x,
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 647, in fused_experts
    moe_align_block_size(curr_topk_ids, config['BLOCK_SIZE_M'], E))
  File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/layers/fused_moe/fused_moe.py", line 228, in moe_align_block_size
    ops.moe_align_block_size(topk_ids, num_experts, block_size, sorted_ids,
  File "/usr/local/lib/python3.10/dist-packages/vllm/_custom_ops.py", line 29, in wrapper
    return fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/vllm/_custom_ops.py", line 538, in moe_align_block_size
    torch.ops._C.moe_align_block_size(topk_ids, num_experts, block_size,
  File "/usr/local/lib/python3.10/dist-packages/torch/_ops.py", line 1061, in __call__
    return self._op(*args, **(kwargs or {}))
RuntimeError: CUDA error: invalid argument
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

ERROR 09-10 00:28:45 api_server.py:186] RPCServer process died before responding to readiness probe