sgl-project / sglang

SGLang is a fast serving framework for large language models and vision language models.
https://sglang.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Multi-Node communication issue #836

Open · dmakhervaks opened this issue 1 month ago

dmakhervaks commented 1 month ago

Describe the bug

I am trying to run a model across 2 nodes, but I am seeing errors related to ProcessGroupGloo. It looks like some sort of networking issue.

I am running your latest Docker image (v0.2.7-cu121) on two separate nodes.

I launch the following python commands inside the Docker container on each node, respectively:

python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B --tp 4 --nccl-init 10.53.1.111:9009 --nnodes 2 --node-rank 0

python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B --tp 4 --nccl-init 10.53.1.111:9009 --nnodes 2 --node-rank 1

[screenshot: ProcessGroupGloo connection error traceback]
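
As a way to isolate the failure, a minimal standalone Gloo rendezvous test (independent of sglang) can show whether the two nodes can reach each other at all. The sketch below is illustrative only: the RANK environment variable and port 9010 are assumptions for this test, not sglang parameters.

# gloo_check.py -- run with RANK=0 on node 1 and RANK=1 on node 2
import os
import torch
import torch.distributed as dist

rank = int(os.environ["RANK"])  # 0 or 1, one process per node

# Same head-node IP as --nccl-init, but a different free port
dist.init_process_group(
    backend="gloo",
    init_method="tcp://10.53.1.111:9010",
    world_size=2,
    rank=rank,
)

t = torch.ones(1)
dist.all_reduce(t)  # sums across both nodes; prints 2.0 if networking works
print(f"rank {rank}: all_reduce -> {t.item()}")
dist.destroy_process_group()

If this script also hangs or fails, the problem is in the network path or interface selection rather than in sglang itself.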

Reproduction

meta-llama/Meta-Llama-3.1-8B

Environment

Python: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA H100 80GB HBM3
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.1, V12.1.105
CUDA Driver Version: 535.183.01 (identical across all 8 GPUs)
PyTorch: 2.3.1+cu121
flashinfer: 0.1.2+cu121torch2.3
requests: 2.32.3
tqdm: 4.66.4
numpy: 1.26.4
aiohttp: 3.9.5
fastapi: 0.111.1
hf_transfer: 0.1.8
huggingface_hub: 0.24.3
interegular: 0.3.3
packaging: 24.1
PIL: 10.4.0
psutil: 6.0.0
pydantic: 2.8.2
uvicorn: 0.30.3
uvloop: 0.19.0
zmq: 26.0.3
vllm: 0.5.3.post1
openai: 1.37.1
anthropic: 0.32.0
NVIDIA Topology:
    GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    NIC7    NIC8    NIC9    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X  NV18    NV18    NV18    NV18    NV18    NV18    NV18    PIX PIX SYS SYS SYS SYS SYS SYS SYS SYS 0-11    0       N/A
GPU1    NV18     X  NV18    NV18    NV18    NV18    NV18    NV18    SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS 24-35   2       N/A
GPU2    NV18    NV18     X  NV18    NV18    NV18    NV18    NV18    SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS 36-47   3       N/A
GPU3    NV18    NV18    NV18     X  NV18    NV18    NV18    NV18    SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS 12-23   1       N/A
GPU4    NV18    NV18    NV18    NV18     X  NV18    NV18    NV18    SYS SYS SYS SYS SYS PIX PIX SYS SYS SYS 48-59   4       N/A
GPU5    NV18    NV18    NV18    NV18    NV18     X  NV18    NV18    SYS SYS SYS SYS SYS SYS SYS PIX SYS SYS 72-83   6       N/A
GPU6    NV18    NV18    NV18    NV18    NV18    NV18     X  NV18    SYS SYS SYS SYS SYS SYS SYS SYS PIX SYS 84-95   7       N/A
GPU7    NV18    NV18    NV18    NV18    NV18    NV18    NV18     X  SYS SYS SYS SYS SYS SYS SYS SYS SYS PIX 60-71   5       N/A
NIC0    PIX SYS SYS SYS SYS SYS SYS SYS  X  PIX SYS SYS SYS SYS SYS SYS SYS SYS
NIC1    PIX SYS SYS SYS SYS SYS SYS SYS PIX  X  SYS SYS SYS SYS SYS SYS SYS SYS
NIC2    SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS  X  SYS SYS SYS SYS SYS SYS SYS
NIC3    SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS  X  SYS SYS SYS SYS SYS SYS
NIC4    SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS  X  SYS SYS SYS SYS SYS
NIC5    SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS  X  PIX SYS SYS SYS
NIC6    SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS PIX  X  SYS SYS SYS
NIC7    SYS SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS  X  SYS SYS
NIC8    SYS SYS SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS  X  SYS
NIC9    SYS SYS SYS SYS SYS SYS SYS PIX SYS SYS SYS SYS SYS SYS SYS SYS SYS  X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6
  NIC7: mlx5_7
  NIC8: mlx5_8
  NIC9: mlx5_9

ulimit soft: 1048576

zhyncs commented 1 month ago

cc @Ying1123

merrymercy commented 1 month ago

Set the environment variable: export GLOO_SOCKET_IFNAME=eth0
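
Note that eth0 is only the usual interface name inside a container; the right value is whichever interface carries the inter-node network (the one that owns 10.53.1.111 on the head node). As a quick sketch for listing the candidates, using psutil (already present in the reported environment):

# which_iface.py -- list IPv4 interfaces to pick the right GLOO_SOCKET_IFNAME
import socket
import psutil

for name, addrs in psutil.net_if_addrs().items():
    for addr in addrs:
        if addr.family == socket.AF_INET:
            print(f"{name}: {addr.address}")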

dmakhervaks commented 1 month ago

@merrymercy after executing the following commands on nodes 1 and 2 respectively:

First, set the environment variable on each node: export GLOO_SOCKET_IFNAME=eth0

Then launch:

GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B --tp 4 --nccl-init 10.53.1.111:9009 --nnodes 2 --node-rank 0

GLOO_SOCKET_IFNAME=eth0 python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3.1-8B --tp 4 --nccl-init 10.53.1.111:9009 --nnodes 2 --node-rank 1

This is what I see on node 2:

[screenshot: error output on node 2]
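
If the error persists with the interface pinned, a basic TCP reachability check from node 2 against the rendezvous endpoint can rule out firewall or Docker network isolation. An illustrative sketch (run it while the node-rank-0 launch is waiting on the port):

# port_check.py -- verify the rendezvous endpoint is reachable from node 2
import socket

s = socket.create_connection(("10.53.1.111", 9009), timeout=5)
print("TCP connect to 10.53.1.111:9009 OK")
s.close()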