sgl-project / sglang

SGLang is a fast serving framework for large language models and vision language models.
https://sglang.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Using 8 H20 GPUs, deepseek-coder-v2-fp8 starts up normally, but there is no response to client requests. #1329

Closed: fengyang95 closed this issue 1 month ago.

fengyang95 commented 1 month ago


Describe the bug

server:

[Screenshot: server log, 2024-09-04 8:52 PM]

Reproduction

server:

python3 -m sglang.launch_server --model models--neuralmagic--DeepSeek-Coder-V2-Instruct-FP8 --enable-mla --trust-remote-code --quantization fp8 --mem-frac 0.95 --tp 8 --kv-cache-dtype fp8_e5m2 --port 9379 --disable-radix --chunked-prefill-size 1024 --max-running-requests 32 --context-length 16384
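
A quick sanity check that the server is reachable before sending chat requests (a minimal sketch in Python, assuming sglang's OpenAI-compatible /v1/models endpoint):

# Sanity check (sketch): confirm the server on port 9379 answers at all.
# Uses the requests package; /v1/models should list the served model.
import requests

resp = requests.get("http://localhost:9379/v1/models", timeout=10)
print(resp.status_code)
print(resp.json())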

curl:

curl http://localhost:9379/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "pls write quick sort"}
        ],
        "stream": true,
        "model":"dsv2"
      }'
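
The same request can also be sent with the openai Python client (a minimal sketch, assuming the openai 1.43.0 package listed in the environment below):

# Streaming chat request, equivalent to the curl above (a sketch).
# base_url points at the sglang server; the api_key value is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:9379/v1", api_key="EMPTY")

stream = client.chat.completions.create(
    model="dsv2",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "pls write quick sort"},
    ],
    stream=True,
)
for chunk in stream:
    # Print incremental deltas as they arrive; a hung server prints nothing.
    print(chunk.choices[0].delta.content or "", end="", flush=True)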

Environment

Python: 3.11.2 (main, Jul 23 2024, 17:09:09) [GCC 10.2.1 20210110]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H20
GPU 0,1,2,3,4,5,6,7 Compute Capability: 9.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.161.08
PyTorch: 2.4.0+cu121
sglang: 0.3.0
flashinfer: 0.1.6+cu124torch2.4
triton: 3.0.0
transformers: 4.44.2
requests: 2.32.3
tqdm: 4.66.5
numpy: 1.26.4
aiohttp: 3.10.5
fastapi: 0.112.2
hf_transfer: 0.1.8
huggingface_hub: 0.24.6
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.0.0
pydantic: 2.8.2
uvicorn: 0.30.6
uvloop: 0.20.0
zmq: 26.2.0
vllm: 0.5.5
multipart: 0.0.9
openai: 1.43.0
anthropic: 0.34.1

NVIDIA Topology:

      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  NIC0  NIC1  NIC2  NIC3  NIC4  NIC5  NIC6  NIC7  CPU Affinity   NUMA Affinity  GPU NUMA ID
GPU0  X     NV18  NV18  NV18  NV18  NV18  NV18  NV18  PIX   NODE  NODE  NODE  SYS   SYS   SYS   SYS   1-47,97-143    0              N/A
GPU1  NV18  X     NV18  NV18  NV18  NV18  NV18  NV18  NODE  PIX   NODE  NODE  SYS   SYS   SYS   SYS   1-47,97-143    0              N/A
GPU2  NV18  NV18  X     NV18  NV18  NV18  NV18  NV18  NODE  NODE  PIX   NODE  SYS   SYS   SYS   SYS   1-47,97-143    0              N/A
GPU3  NV18  NV18  NV18  X     NV18  NV18  NV18  NV18  NODE  NODE  NODE  PIX   SYS   SYS   SYS   SYS   1-47,97-143    0              N/A
GPU4  NV18  NV18  NV18  NV18  X     NV18  NV18  NV18  SYS   SYS   SYS   SYS   PIX   NODE  NODE  NODE  49-94,145-190  1              N/A
GPU5  NV18  NV18  NV18  NV18  NV18  X     NV18  NV18  SYS   SYS   SYS   SYS   NODE  PIX   NODE  NODE  49-94,145-190  1              N/A
GPU6  NV18  NV18  NV18  NV18  NV18  NV18  X     NV18  SYS   SYS   SYS   SYS   NODE  NODE  PIX   NODE  49-94,145-190  1              N/A
GPU7  NV18  NV18  NV18  NV18  NV18  NV18  NV18  X     SYS   SYS   SYS   SYS   NODE  NODE  NODE  PIX   49-94,145-190  1              N/A
NIC0  PIX   NODE  NODE  NODE  SYS   SYS   SYS   SYS   X     NODE  NODE  NODE  SYS   SYS   SYS   SYS
NIC1  NODE  PIX   NODE  NODE  SYS   SYS   SYS   SYS   NODE  X     NODE  NODE  SYS   SYS   SYS   SYS
NIC2  NODE  NODE  PIX   NODE  SYS   SYS   SYS   SYS   NODE  NODE  X     NODE  SYS   SYS   SYS   SYS
NIC3  NODE  NODE  NODE  PIX   SYS   SYS   SYS   SYS   NODE  NODE  NODE  X     SYS   SYS   SYS   SYS
NIC4  SYS   SYS   SYS   SYS   PIX   NODE  NODE  NODE  SYS   SYS   SYS   SYS   X     NODE  NODE  NODE
NIC5  SYS   SYS   SYS   SYS   NODE  PIX   NODE  NODE  SYS   SYS   SYS   SYS   NODE  X     NODE  NODE
NIC6  SYS   SYS   SYS   SYS   NODE  NODE  PIX   NODE  SYS   SYS   SYS   SYS   NODE  NODE  X     NODE
NIC7  SYS   SYS   SYS   SYS   NODE  NODE  NODE  PIX   SYS   SYS   SYS   SYS   NODE  NODE  NODE  X

Legend:

X    = Self
SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX  = Connection traversing at most a single PCIe bridge
NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_1
NIC1: mlx5_2
NIC2: mlx5_3
NIC3: mlx5_4
NIC4: mlx5_5
NIC5: mlx5_6
NIC6: mlx5_7
NIC7: mlx5_8

ulimit soft: 1024768

zhyncs commented 1 month ago

I’ll take a look.

zhyncs commented 1 month ago

Because the H20 is sold only in mainland China, I couldn't find a corresponding machine configuration on RunPod. I plan to test with the H100 NVL, whose 94 GB of VRAM is very close to the H20's 96 GB.

zhyncs commented 1 month ago

It works well on H100 NVL x 8.

# server
python3 -m sglang.launch_server --model neuralmagic/DeepSeek-Coder-V2-Instruct-FP8 --enable-mla --trust-remote-code --quantization fp8 --mem-frac 0.95 --tp 8 --kv-cache-dtype fp8_e5m2 --port 9379 --disable-radix --chunked-prefill-size 1024 --max-running-requests 32 --context-length 16384

# stream true or false both ok
curl http://localhost:9379/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "pls write quick sort"}
        ],
        "stream": true,
        "model":"dsv2"
      }'

curl http://localhost:9379/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful assistant."},
          {"role": "user", "content": "pls write quick sort"}
        ],
        "stream": false,
        "model":"dsv2"
      }'
fengyang95 commented 1 month ago

> It works well on H100 NVL x 8.

@zhyncs Reinstalling PyTorch with the command below solved the problem. It seems the CUDA environment was the cause: the environment report above shows a cu121 PyTorch build (2.4.0+cu121) next to a CUDA 12.4 NVCC and a cu124 flashinfer build, and the reinstall pulls the cu124 wheel.

pip3 install torch==2.4.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124 --force-reinstall
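
A quick way to confirm the reinstalled wheel now matches the system toolkit (a minimal sketch; the expected values follow from the environment report above):

# Verify the torch build and GPU visibility after the reinstall (a sketch).
import torch

print(torch.__version__)          # expect 2.4.0+cu124 after the reinstall
print(torch.version.cuda)         # expect 12.4, matching the NVCC above
print(torch.cuda.is_available())  # expect True
print(torch.cuda.device_count())  # expect 8 on this H20 node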