sgl-project / sglang

SGLang is a fast serving framework for large language models and vision language models.
https://sglang.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Cannot run `microsoft/Phi-3.5-mini-instruct`; Capture cuda graph failed #1751

Closed: HuanzhiMao closed this issue 1 hour ago

HuanzhiMao commented 1 hour ago

Describe the bug

When running microsoft/Phi-3.5-mini-instruct on a single H100, sglang fails during CUDA graph capture with the following error.

Exception: Capture cuda graph failed: BatchDecodeWithPagedKVCachePyTorchWrapper::Plan(at::Tensor, at::Tensor, at::Tensor, at::Tensor, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, float, at::Tensor, at::Tensor)::<lambda()>::<lambda()>::<lambda()> failed to dispatch head_dim 96
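For context, the head_dim of 96 follows directly from the model config: Phi-3.5-mini uses hidden_size 3072 with 32 attention heads, so 3072 / 32 = 96, a head dimension that the flashinfer decode kernels in this setup do not dispatch. A minimal sketch to confirm the value, assuming the standard transformers AutoConfig API:

from transformers import AutoConfig

# Load the Hugging Face config for Phi-3.5-mini and derive the per-head dimension.
cfg = AutoConfig.from_pretrained("microsoft/Phi-3.5-mini-instruct")
head_dim = cfg.hidden_size // cfg.num_attention_heads
print(cfg.hidden_size, cfg.num_attention_heads, head_dim)  # expected: 3072 32 96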

Full terminal output:

$ python -m sglang.launch_server --model-path microsoft/Phi-3.5-mini-instruct
2024-10-22 06:31:31.858362: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-10-22 06:31:31.874615: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-10-22 06:31:31.897134: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8463] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-10-22 06:31:31.903828: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-10-22 06:31:31.919069: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
[2024-10-22 06:31:36] server_args=ServerArgs(model_path='microsoft/Phi-3.5-mini-instruct', tokenizer_path='microsoft/Phi-3.5-mini-instruct', tokenizer_mode='auto', skip_tokenizer_init=False, load_format='auto', trust_remote_code=False, dtype='auto', kv_cache_dtype='auto', quantization=None, context_length=None, device='cuda', served_model_name='microsoft/Phi-3.5-mini-instruct', chat_template=None, is_embedding=False, host='127.0.0.1', port=30000, mem_fraction_static=0.88, max_running_requests=None, max_total_tokens=None, chunked_prefill_size=8192, max_prefill_tokens=16384, schedule_policy='lpm', schedule_conservativeness=1.0, tp_size=1, stream_interval=1, random_seed=33262822, constrained_json_whitespace_pattern=None, log_level='info', log_level_http=None, log_requests=False, show_time_cost=False, api_key=None, file_storage_pth='SGLang_storage', enable_cache_report=False, dp_size=1, load_balance_method='round_robin', dist_init_addr=None, nnodes=1, node_rank=0, json_model_override_args='{}', enable_double_sparsity=False, ds_channel_config_path=None, ds_heavy_channel_num=32, ds_heavy_token_num=256, ds_heavy_channel_type='qk', ds_sparse_decode_threshold=4096, lora_paths=None, max_loras_per_batch=8, attention_backend='flashinfer', sampling_backend='flashinfer', disable_flashinfer=False, disable_flashinfer_sampling=False, disable_radix_cache=False, disable_regex_jump_forward=False, disable_cuda_graph=False, disable_cuda_graph_padding=False, disable_disk_cache=False, disable_custom_all_reduce=False, disable_mla=False, disable_penalizer=False, disable_nan_detection=False, enable_overlap_schedule=False, enable_mixed_chunk=False, enable_torch_compile=False, max_torch_compile_bs=32, torchao_config='', enable_p2p_check=False, triton_attention_reduce_in_fp32=False, num_continuous_decode_steps=1)
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
[2024-10-22 06:31:45 TP0] Init torch distributed begin.
[2024-10-22 06:31:46 TP0] Load weight begin. avail mem=78.65 GB
INFO 10-22 06:31:46 config.py:107] Replacing legacy 'type' key with 'rope_type'
[2024-10-22 06:31:47 TP0] lm_eval is not installed, GPTQ may not be usable
INFO 10-22 06:31:47 weight_utils.py:243] Using model weights format ['*.safetensors']
Loading safetensors checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]
Loading safetensors checkpoint shards:  50% Completed | 1/2 [00:00<00:00,  1.32it/s]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:01<00:00,  1.76it/s]
Loading safetensors checkpoint shards: 100% Completed | 2/2 [00:01<00:00,  1.67it/s]

[2024-10-22 06:31:49 TP0] Load weight end. type=Phi3ForCausalLM, dtype=torch.bfloat16, avail mem=71.38 GB
[2024-10-22 06:31:49 TP0] Memory pool end. avail mem=8.38 GB
[2024-10-22 06:31:49 TP0] Capture cuda graph begin. This can take up to several minutes.
[2024-10-22 06:31:49 TP0] Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/model_executor/cuda_graph_runner.py", line 162, in __init__
    self.capture()
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/model_executor/cuda_graph_runner.py", line 212, in capture
    ) = self.capture_one_batch_size(bs, forward)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/model_executor/cuda_graph_runner.py", line 233, in capture_one_batch_size
    self.model_runner.attn_backend.init_forward_metadata_capture_cuda_graph(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/layers/attention/flashinfer_backend.py", line 187, in init_forward_metadata_capture_cuda_graph
    self.indices_updater_decode.update(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/layers/attention/flashinfer_backend.py", line 352, in update_single_wrapper
    self.call_begin_forward(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/layers/attention/flashinfer_backend.py", line 452, in call_begin_forward
    wrapper.begin_forward(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/flashinfer/decode.py", line 543, in plan
    self._wrapper.plan(
RuntimeError: BatchDecodeWithPagedKVCachePyTorchWrapper::Plan(at::Tensor, at::Tensor, at::Tensor, at::Tensor, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, float, at::Tensor, at::Tensor)::<lambda()>::<lambda()>::<lambda()> failed to dispatch head_dim 96

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 1128, in run_scheduler_process
    scheduler = Scheduler(server_args, port_args, gpu_id, tp_rank, dp_rank)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 155, in __init__
    self.tp_worker = TpWorkerClass(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/managers/tp_worker.py", line 55, in __init__
    self.model_runner = ModelRunner(
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/model_executor/model_runner.py", line 166, in __init__
    self.init_cuda_graphs()
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/model_executor/model_runner.py", line 557, in init_cuda_graphs
    self.cuda_graph_runner = CudaGraphRunner(self)
  File "/home/ubuntu/.local/lib/python3.10/site-packages/sglang/srt/model_executor/cuda_graph_runner.py", line 164, in __init__
    raise Exception(
Exception: Capture cuda graph failed: BatchDecodeWithPagedKVCachePyTorchWrapper::Plan(at::Tensor, at::Tensor, at::Tensor, at::Tensor, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, float, at::Tensor, at::Tensor)::<lambda()>::<lambda()>::<lambda()> failed to dispatch head_dim 96
Possible solutions:
1. disable cuda graph by --disable-cuda-graph
2. set --mem-fraction-static to a smaller value (e.g., 0.8 or 0.7)
3. disable torch compile by not using --enable-torch-compile
Open an issue on GitHub https://github.com/sgl-project/sglang/issues/new/choose 

/usr/lib/python3.10/multiprocessing/resource_tracker.py:104: UserWarning: resource_tracker: process died unexpectedly, relaunching.  Some resources might leak.
  warnings.warn('resource_tracker: process died unexpectedly, '
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/resource_tracker.py", line 209, in main
    cache[rtype].remove(name)
KeyError: '/mp-8f_f5ecf'
Killed

Reproduction

python -m sglang.launch_server --model-path microsoft/Phi-3.5-mini-instruct

Environment

2024-10-22 06:15:31.990529: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-10-22 06:15:32.007851: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:485] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-10-22 06:15:32.030825: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:8463] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-10-22 06:15:32.037602: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1452] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-10-22 06:15:32.053158: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX512F AVX512_VNNI AVX512_BF16 AVX512_FP16 AVX_VNNI, in other operations, rebuild TensorFlow with the appropriate compiler flags.
/usr/lib/python3/dist-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.17.3 and <1.25.0 is required for this version of SciPy (detected version 1.26.4
  warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
Python: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0]
CUDA available: True
GPU 0: NVIDIA H100 PCIe
GPU 0 Compute Capability: 9.0
CUDA_HOME: /usr
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.90.12
PyTorch: 2.4.0+cu121
sglang: 0.3.4.post1
flashinfer: 0.1.6+cu124torch2.4
triton: 3.0.0
transformers: 4.45.2
requests: 2.32.3
tqdm: 4.66.5
numpy: 1.26.4
aiohttp: 3.10.10
fastapi: 0.115.2
hf_transfer: 0.1.8
huggingface_hub: 0.26.1
interegular: 0.3.3
packaging: 21.3
PIL: 10.4.0
psutil: 5.9.0
pydantic: 2.9.2
uvicorn: 0.32.0
uvloop: 0.21.0
zmq: 22.3.0
vllm: 0.6.3.post1
multipart: 0.0.12
openai: 1.46.0
anthropic: 0.31.1
NVIDIA Topology: 
        GPU0    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      PHB     0-25    0               N/A
NIC0    PHB      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

Hypervisor vendor: KVM
ulimit soft: 1048576
ByronHsu commented 1 hour ago

This is related to https://github.com/flashinfer-ai/flashinfer/issues/528. Until that issue is fixed on the flashinfer side, can you try `--disable-flashinfer`? That will fall back to our Triton backend.
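For example, a minimal sketch of the workaround launch command, assuming the flag names match the fields shown in the server_args dump above (disable_flashinfer / attention_backend):

python -m sglang.launch_server --model-path microsoft/Phi-3.5-mini-instruct --disable-flashinfer

If `--disable-flashinfer` is not accepted by your build, selecting the Triton attention backend explicitly should have the same effect:

python -m sglang.launch_server --model-path microsoft/Phi-3.5-mini-instruct --attention-backend triton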