sgl-project / sglang

SGLang is a fast serving framework for large language models and vision language models.
https://sgl-project.github.io/
Apache License 2.0

[Bug] FlashInfer support for <=sm_75 #931

Closed — horiacristescu closed this issue 1 month ago

horiacristescu commented 3 months ago

Describe the bug

SGLang cannot be used with FlashInfer on GPUs with compute capability sm_75 or lower, not even after recompiling FlashInfer from source. Please document this clearly so people don't waste time trying to make it work.

Reproduction

Simply launching the server without --disable-flashinfer --disable-flashinfer-sampling causes a crash.

Environment

Python: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0]
CUDA available: True
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda-12.6
NVCC: Cuda compilation tools, release 12.6, V12.6.20
CUDA Driver Version: 535.54.03
PyTorch: 2.3.1+cu121
sglang: 0.2.10
flashinfer: 0.1.3
triton: 2.3.1
requests: 2.32.3
tqdm: 4.66.4
numpy: 1.26.4
aiohttp: 3.9.5
fastapi: 0.112.0
hf_transfer: 0.1.8
huggingface_hub: 0.24.5
interegular: 0.3.3
packaging: 24.0
PIL: 10.3.0
psutil: 5.9.8
pydantic: 2.7.1
uvicorn: 0.30.5
uvloop: 0.19.0
zmq: 26.0.3
vllm: 0.5.3.post1
multipart: 0.0.9
openai: 1.38.0
anthropic: 0.32.0
NVIDIA Topology: 
    GPU0    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X  SYS 0-15        N/A     N/A
NIC0    SYS  X              

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

ulimit soft: 1024
zhyncs commented 3 months ago

Hi @horiacristescu This is expected: although FlashInfer nominally supports sm70 and sm75, it mainly targets sm80+. For architectures below sm80, we currently recommend enabling the --disable-flashinfer --disable-flashinfer-sampling flags. We apologize for any inconvenience and thank you for your understanding. cc @yzh119

yawzhe commented 3 months ago

To enable the --disable-flashinfer --disable-flashinfer-sampling flags, do I set them directly like this: python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --host 0.0.0.0 --port 30000 --disable-flashinfer True --disable-flashinfer-sampling True?

zhyncs commented 3 months ago
python3 -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --host 0.0.0.0 --port 30000 --disable-flashinfer --disable-flashinfer-sampling
yawzhe commented 3 months ago

That doesn't work; it fails with "No module named flashinfer". I never installed flashinfer, since my server's GPU doesn't support it.

zhyncs commented 3 months ago

FlashInfer is one of SGLang's required dependencies; you must install it to use SGLang at all. Disabling FlashInfer merely means it is not used at runtime; it does not mean you can skip installing it.

FlashInfer supports

Python: 3.8, 3.9, 3.10, 3.11

PyTorch: 2.2/2.3/2.4 with CUDA 11.8/12.1/12.4 (CUDA 12.4 only for torch 2.4)

https://docs.flashinfer.ai/installation.html
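For an environment like the reporter's (PyTorch 2.3, CUDA 12.1), FlashInfer is installed from a prebuilt wheel index rather than plain PyPI. A sketch based on my reading of the installation docs linked above; the exact index URL for your torch/CUDA combination should be taken from that page:

```shell
# Install FlashInfer from the prebuilt wheel index
# (cu121/torch2.3 matches the environment reported above;
# adjust the path to your own CUDA and torch versions).
pip install flashinfer -i https://flashinfer.ai/whl/cu121/torch2.3/
```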

zhyncs commented 1 month ago

Currently, SGLang already supports sm75 GPUs such as the T4. You are welcome to try the latest version. We currently have no plans to support sm70. Thanks!
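For readers automating launches across mixed hardware, a small sketch of the flag selection discussed in this thread (the helper names are mine, not part of SGLang): it appends the disable flags when the GPU's compute capability is below sm80. The capability tuple can be read at runtime with `torch.cuda.get_device_capability(0)`, which returns e.g. `(7, 5)` on a Tesla T4.

```python
def needs_flashinfer_disabled(major: int, minor: int) -> bool:
    """FlashInfer in SGLang mainly targets sm80+; below that, this
    thread recommends --disable-flashinfer --disable-flashinfer-sampling."""
    return (major, minor) < (8, 0)


def launch_args(major: int, minor: int) -> list[str]:
    """Build the launch command from this thread, adding the disable
    flags for pre-sm80 GPUs. Pass the tuple from
    torch.cuda.get_device_capability(0)."""
    args = [
        "python3", "-m", "sglang.launch_server",
        "--model-path", "meta-llama/Meta-Llama-3-8B-Instruct",
        "--host", "0.0.0.0", "--port", "30000",
    ]
    if needs_flashinfer_disabled(major, minor):
        # Note: these are boolean flags and take no value (see above).
        args += ["--disable-flashinfer", "--disable-flashinfer-sampling"]
    return args
```

Note that, per the maintainer's reply, FlashInfer must still be installed even when these flags are passed.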