vllm-project / vllm


[Bug]: error: triton_flash_attention.py #5696

Open taikai-zz opened 3 months ago

taikai-zz commented 3 months ago

Your current environment

Collecting environment information...
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
PyTorch version: 2.1.1+git011de5c
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.0.32830-d62f6a171

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 17.0.0 (https://github.com/RadeonOpenCompute/llvm-project roc-6.0.0 23483 7208e8d15fbf218deb74483ea8c549c67ca4985e)
CMake version: version 3.29.5
Libc version: glibc-2.31

Python version: 3.9.18 (main, Sep 11 2023, 13:41:44) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 10.1.243
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: AMD Radeon PRO W6800 (NoGCNArchNameOnOldPyTorch)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.0.32830
MIOpen runtime version: 3.0.0
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 12
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Silver 4214 CPU @ 2.20GHz
Stepping: 7
CPU MHz: 2200.031
BogoMIPS: 4400.06
Virtualization: VT-x
L1d cache: 384 KiB
L1i cache: 384 KiB
L2 cache: 48 MiB
L3 cache: 192 MiB
NUMA node0 CPU(s): 0-11
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat umip pku ospke avx512_vnni md_clear arch_capabilities

Versions of relevant libraries:
[pip3] mypy==1.4.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.1.1+git011de5c
[pip3] torchvision==0.16.1+fdea156
[pip3] transformers==4.41.2
[pip3] triton==2.1.0
[conda] No relevant packages
ROCM Version: 6.0.32830-d62f6a171
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology: Could not collect

🐛 Describe the bug

from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B-Instruct")
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)
llm = LLM(model="Qwen/Qwen2-7B-Instruct")
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: resume_download is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use force_download=True.
  warnings.warn(
INFO 06-20 00:34:54 llm_engine.py:164] Initializing an LLM engine (v0.5.0.post1) with config: model='Qwen/Qwen2-7B-Instruct', speculative_config=None, tokenizer='Qwen/Qwen2-7B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=Qwen/Qwen2-7B-Instruct)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
/opt/conda/envs/py_3.9/lib/python3.9/site-packages/torch/cuda/__init__.py:611: UserWarning: Can't initialize NVML
  warnings.warn("Can't initialize NVML")
INFO 06-20 00:34:55 selector.py:133] flash_attn is not supported on NAVI GPUs.
INFO 06-20 00:34:55 selector.py:57] Using ROCmFlashAttention backend.
WARNING 06-20 00:34:55 __init__.py:104] Model architecture Qwen2ForCausalLM is partially supported by ROCm: Sliding window attention is not yet supported in ROCm's flash attention
INFO 06-20 00:34:56 selector.py:133] flash_attn is not supported on NAVI GPUs.
INFO 06-20 00:34:56 selector.py:57] Using ROCmFlashAttention backend.
INFO 06-20 00:34:56 weight_utils.py:218] Using model weights format ['*.safetensors']
INFO 06-20 00:38:45 model_runner.py:160] Loading model weights took 14.2487 GB

error: triton_flash_attention.py:211:0: stack frame size (556488) exceeds limit (262112) in function 'attn_fwd_0d1d2d3de45de6d7de8de9de10c11de12de13de14c15de16de17de18c19de20de21de22c23de24de25de26de27d28d29303132de'

hongxiayang commented 2 months ago

@taikai-zz For your issue, you may need to use naive attention for your device. Please set the environment variable VLLM_USE_TRITON_FLASH_ATTN=0.
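For example (a minimal sketch of the workaround; the script name is just a placeholder, and the variable needs to be set before vLLM selects its attention backend):

export VLLM_USE_TRITON_FLASH_ATTN=0
python offline_inference.py

or equivalently from Python, at the very top of the script:

import os
os.environ["VLLM_USE_TRITON_FLASH_ATTN"] = "0"  # disable the Triton flash-attention path on ROCm

from vllm import LLM, SamplingParams

sampling_params = SamplingParams(temperature=0.7, top_p=0.8, repetition_penalty=1.05, max_tokens=512)
llm = LLM(model="Qwen/Qwen2-7B-Instruct")  # same model as in the report above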

linchen111 commented 1 month ago

> @taikai-zz For your issue, you may need to use naive attention for your device. Please set the environment variable VLLM_USE_TRITON_FLASH_ATTN=0.

Does naive attention use the CPU?

hongxiayang commented 1 month ago

No, it still uses the GPU. It uses PyTorch's SDPA math attention, backed by AMD GPU kernels.
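For context, this is roughly the kernel family being referred to; a plain-PyTorch illustration only, not vLLM's actual code path (tensor shapes are arbitrary):

import torch
import torch.nn.functional as F

# Q/K/V tensors of shape (batch, heads, seq_len, head_dim); on ROCm the HIP device is exposed as "cuda".
q = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

# Force the math SDPA backend (no flash / memory-efficient kernels); this still runs entirely on the GPU.
with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_math=True, enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v, is_causal=True)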

linchen111 commented 1 month ago

> No, it still uses the GPU. It uses PyTorch's SDPA math attention, backed by AMD GPU kernels.

Thanks. It's super slow on the MI50 (no flash-attention support).