Closed tristandevs closed 2 months ago
Attempting to set `PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True` gives the error `Warning: expandable_segments not supported on this platform (function operator())`, even with a nightly PyTorch build.
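For context, this allocator option only takes effect if it is configured before PyTorch initializes its CUDA caching allocator, i.e. before the first CUDA tensor is created. A minimal sketch (on platforms where `expandable_segments` is supported; the warning above means PyTorch falls back to the default allocator):

```python
import os

# Must be set before the first CUDA allocation, so set it before
# importing torch (or export it in the shell that launches vLLM).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Exporting the variable in the shell before starting the server is equivalent and avoids any import-order pitfalls.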
You need to pass the correct quantization method to vLLM; it looks like it is loading the model as if it were a plain Qwen2 model.
May I ask what the "lower GPU utilization" you used here was? I have an ongoing PR (#8541) trying to explain why we should set gpu_memory_utilization as small as possible: basically, gpu_memory_utilization only accounts for model weights + KV cache + CUDA graphs, and does not count run-time activation memory. I once ran into this problem and set gpu_memory_utilization to just a bit over the model weights (roughly 0.5 for a 70B Llama-3 fp16 model on 4× A100 80 GB cards; a simple calculation: 70 × 2 = 140 GB needed, which is smaller than 80 × 4 × 0.5 = 160 GB available) and no more OOM happened. Judging by the model config on Hugging Face, your model seems to be a quantized version of some Qwen2 70B-class model, and under int4 you need at least 140 / 4 = 35 GB just to host the weights.
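The arithmetic above can be packaged as a quick back-of-the-envelope check (a sketch; the function name and numbers are illustrative, not part of vLLM's API):

```python
def min_gpu_memory_utilization(params_b: float, bytes_per_param: float,
                               num_gpus: int, gpu_mem_gb: float) -> float:
    """Smallest gpu_memory_utilization that still fits the weights alone.

    Anything below this value cannot even hold the model; in practice you
    need headroom on top for KV cache, CUDA graphs, and activations.
    """
    weight_gb = params_b * bytes_per_param  # e.g. 70B params * 2 bytes (fp16)
    return weight_gb / (num_gpus * gpu_mem_gb)

# Llama-3 70B in fp16 on 4x A100 80 GB: 140 GB / 320 GB available
print(min_gpu_memory_utilization(70, 2, 4, 80))  # 0.4375, so ~0.5 leaves a margin

# The same 70B model in int4 (~0.5 byte/param): 35 GB of weights total
print(70 * 0.5)  # 35.0
```

The point is that gpu_memory_utilization is a *budget*, not a safety margin: setting it lower leaves more unaccounted memory for activations.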
By the way, it seems a bit odd to run an int4-quantized model with bf16 precision. Also, when you ran the model on the 4× A100 machine, did you set tensor_parallel_size=4 to use all 4 GPUs?
Passing `--quantization gptq` works.
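For anyone landing here later, a launch sketch with the quantization method stated explicitly (model name and flags taken from this thread; this is a config fragment, adjust to your own setup):

```shell
python3 -m vllm.entrypoints.openai.api_server \
    --model shuttleai/shuttle-3-GPTQ-Int4 \
    --quantization gptq \
    --tensor-parallel-size 4 \
    --gpu-memory-utilization 0.80
```

Passing `--quantization gptq` explicitly avoids the checkpoint being loaded as a plain Qwen2 model, which is the failure mode described above.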
The lower utilization I used was passing `--gpu-memory-utilization 0.80`. When testing on the 4× A100 machine, I forgot to pass the right tensor-parallel size.
Thank you!
No other GPU processes were running and I set tensor_parallel_size correctly, but I still got the same OOM error from the same line, `weight = Parameter(torch.empty(sum(output_partition_sizes)`. After I set `--enforce-eager=False` to capture CUDA graphs, the problem seems to go away.
Your current environment
The output of `python collect_env.py`
```text
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.35
Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1014-azure-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA A100 80GB PCIe
Nvidia driver version: 555.42.06
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Address sizes:       48 bits physical, 48 bits virtual
Byte Order:          Little Endian
CPU(s):              24
On-line CPU(s) list: 0-23
Vendor ID:           AuthenticAMD
Model name:          AMD EPYC 7V13 64-Core Processor
CPU family:          25
Model:               1
Thread(s) per core:  1
Core(s) per socket:  24
Socket(s):           1
Stepping:            1
BogoMIPS:            4890.87
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl tsc_reliable nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves user_shstk clzero xsaveerptr rdpru arat umip vaes vpclmulqdq rdpid fsrm
Hypervisor vendor:   Microsoft
Virtualization type: full
L1d cache:           768 KiB (24 instances)
L1i cache:           768 KiB (24 instances)
L2 cache:            12 MiB (24 instances)
L3 cache:            96 MiB (3 instances)
NUMA node(s):        1
NUMA node0 CPU(s):   0-23
Vulnerability Gather data sampling:   Not affected
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass:      Vulnerable
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.550.52
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.0.3
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.0.dev0
[pip3] triton==3.0.0
[pip3] vllm-nccl-cu12==2.18.1.0.4.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.post2@9ba0817ff1eb514f51cc6de9cb8e16c98d6ee44f
vLLM Build Flags: CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0  NIC0  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0     X    SYS   0-23          0              N/A
NIC0    SYS    X

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:
  NIC0: mlx5_0
```

Model Input Dumps
No response
🐛 Describe the bug
When I run

```shell
python3 -m vllm.entrypoints.openai.api_server --model shuttleai/shuttle-3-GPTQ-Int4 --dtype bfloat16 --api-key 123456 --tensor-parallel-size 1 --gpu-memory-utilization 0.90 --served-model-name shuttle-2.5
```

I get an OOM error. I am pretty sure my GPU memory is enough to hold the model. The error happens before the model even starts downloading, and I have tested it on my other A100 server and on another 4× A100 server and still get the same error.
I have tried setting `--max-num-seqs 32 --max-model-len 4096` and lowering the GPU utilization; however, I still get the same error.
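As a side note on why `--max-num-seqs` and `--max-model-len` matter: together they bound the KV-cache memory vLLM may reserve. A rough upper-bound sketch (the layer/head numbers below are assumed for a Qwen2-72B-like GQA config, not read from the actual checkpoint):

```python
def kv_cache_gb(num_layers: int, num_kv_heads: int, head_dim: int,
                max_model_len: int, max_num_seqs: int,
                dtype_bytes: int = 2) -> float:
    """Worst-case KV-cache size in GB: keys and values (the factor of 2)
    for every layer, KV head, and token across all concurrent sequences."""
    return (2 * num_layers * num_kv_heads * head_dim
            * max_model_len * max_num_seqs * dtype_bytes) / 1e9

# Assumed config: 80 layers, 8 KV heads (GQA), head_dim 128, fp16 cache.
print(kv_cache_gb(80, 8, 128, 4096, 32))  # ~42.9 GB upper bound
```

In practice vLLM's paged KV cache only allocates blocks as sequences grow, so this is a ceiling, but it shows why shrinking `--max-model-len` and `--max-num-seqs` is a standard OOM mitigation.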