
[Bug]: lm-format-enforcer guided decoding kills MQLLMEngine #8578

Closed · joerunde closed 1 month ago

joerunde commented 1 month ago

Your current environment

The output of `python collect_env.py` (vLLM code from https://github.com/vllm-project/vllm/pull/8574, at commit 84789334a, was used):

```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Red Hat Enterprise Linux 9.4 (Plow) (x86_64)
GCC version: (GCC) 11.4.1 20231218 (Red Hat 11.4.1-3)
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.34

Python version: 3.11.7 (main, Aug 23 2024, 00:00:00) [GCC 11.4.1 20231218 (Red Hat 11.4.1-3)] (64-bit runtime)
Python platform: Linux-4.18.0-372.46.1.el8_6.x86_64-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-80GB
GPU 1: NVIDIA A100-SXM4-80GB

Nvidia driver version: 535.104.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Vendor ID: GenuineIntel
Model name: Intel Xeon Processor (Icelake)
CPU family: 6
Model: 134
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
Stepping: 0
BogoMIPS: 5600.05
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities
Virtualization: VT-x
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 2.5 MiB (80 instances)
L1i cache: 2.5 MiB (80 instances)
L2 cache: 160 MiB (40 instances)
L3 cache: 32 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-39
NUMA node1 CPU(s): 40-79
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] flashinfer==0.1.2+cu121torch2.4
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.68
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==3.1.0
[pip3] torch==2.4.0
[pip3] torchvision==0.19.0
[pip3] transformers==4.44.2
[pip3] transformers-stream-generator==0.0.5
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.post2@caa1aa85fe9dc91aff25786a4efacff951aa7a4d
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    GPU1    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV12    NODE    0-39            0               N/A
GPU1    NV12     X      SYS     40-79           1               N/A
NIC0    NODE    SYS      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
```

Model Input Dumps

No response

🐛 Describe the bug

I was benchmarking the performance of guided decoding using the lm-format-enforcer backend.

Here's the artillery snippet:

```yaml
config:
  timeout: 100
  target: http://rundemc-dev-service:8000
  phases:
    - duration: 300
      arrivalRate: 1
      name: Load test

  payload:
    # path is relative to the location of the test script
    path: 'payloads.csv'
    fields:
      - prompt
    name: unused

  variables:
    model_id:
      - "mistralai/Mistral-7B-Instruct-v0.2"
    backend:
      - "lm-format-enforcer"

scenarios:
  - name: Test completions
    flow:
      - post:
          url: "/v1/completions"
          json:
            model: "{{ model_id }}"
            prompt: "{{ prompt }}"
            max_tokens: 40
      - post:
          url: "/v1/completions"
          json:
            model: "{{ model_id }}"
            prompt: "{{ prompt }}"
            max_tokens: 40
            guided_decoding_backend: "{{ backend }}"
            guided_choice:
              - "foo"
              - "bar"
              - "baz"
              - "buzz"
      - post:
          url: "/v1/completions"
          json:
            model: "{{ model_id }}"
            prompt: "{{ prompt }}"
            max_tokens: 40
            guided_decoding_backend: "{{ backend }}"
            response_format:
              type: "json_object"
      - post:
          url: "/v1/completions"
          json:
            model: "{{ model_id }}"
            prompt: "{{ prompt }}"
            max_tokens: 40
            guided_decoding_backend: "{{ backend }}"
            guided_json:
              type: "object"
              properties:
                name:
                  type: string
                age:
                  type: integer
```

Here, `payloads.csv` contains some random short texts (<100 input tokens each).
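For illustration, the payload file is just one prompt per row, matching the single `prompt` field declared in the config above (these rows are made-up placeholders, not the actual benchmark data):

```csv
Write a one-sentence summary of the French Revolution.
List three uses for a paperclip.
```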

The server eventually crashes due to the health check timing out:

```
ERROR 09-18 21:04:43 client.py:261] TimeoutError("MQLLMEngine didn't reply within 10000ms")
ERROR 09-18 21:04:43 client.py:261] Traceback (most recent call last):
ERROR 09-18 21:04:43 client.py:261]   File "/workspace/my-vllm/lib64/python3.11/site-packages/vllm/engine/multiprocessing/client.py", line 157, in run_check_health_loop
ERROR 09-18 21:04:43 client.py:261]     await self._await_ack(error_message="Health check failed.",
ERROR 09-18 21:04:43 client.py:261]   File "/workspace/my-vllm/lib64/python3.11/site-packages/vllm/engine/multiprocessing/client.py", line 308, in _await_ack
ERROR 09-18 21:04:43 client.py:261]     raise TimeoutError("MQLLMEngine didn't reply within "
ERROR 09-18 21:04:43 client.py:261] TimeoutError: MQLLMEngine didn't reply within 10000ms
CRITICAL 09-18 21:04:44 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO:     10.131.3.24:42640 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
CRITICAL 09-18 21:04:44 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO:     10.131.3.24:42640 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
CRITICAL 09-18 21:04:44 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO:     10.131.3.24:42640 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
CRITICAL 09-18 21:04:44 launcher.py:99] MQLLMEngine is already dead, terminating server process
INFO:     10.131.3.24:42640 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
INFO:     Shutting down
INFO:     Waiting for connections to close. (CTRL+C to force quit)
```

I then added a little print statement to see how long `self.engine_step()` takes in the MQLLMEngine. Usually a step finishes in well under a second, but every now and then one takes multiple seconds. Maybe this is because of large amounts of prefill happening? Whatever the cause, a long step prevents the engine from responding to a health check in time, since inputs from the client are read serially after each step.
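The instrumentation was just a wall-clock timer around the step call, roughly along these lines (a paraphrased sketch, not the actual MQLLMEngine code):

```python
import time
from typing import Any, Callable

def timed_step(step_fn: Callable[[], Any]) -> Any:
    """Run one engine step and print how long it took."""
    start = time.time()
    result = step_fn()
    print(f"\t @@@ Engine step took: {time.time() - start}s @@@")
    return result

# Hypothetical usage inside the engine's run loop:
#   request_outputs = timed_step(self.engine_step)
```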

e.g. I see this printing out:

```
(really long pause here)
         @@@ Engine step took: 9.088980913162231s @@@
         @@@ Engine step took: 0.05881190299987793s @@@
         @@@ Engine step took: 0.12871074676513672s @@@
         @@@ Engine step took: 0.12064170837402344s @@@
         @@@ Engine step took: 0.09240460395812988s @@@
```

I can dig in more, but it seems not great that the server is this easy to knock over.


joerunde commented 1 month ago

After looking into this more, it appears that the logits processors coming over the ZMQ wire are roughly 4 MB each. They usually unpickle in under a second, but under load the calls to `cloudpickle.loads()` sometimes take much longer, and they hold the GIL the whole time. Because the GIL is held, even solutions like #8583 do not fix the problem of the client losing its connection with the engine and exiting.

I don't know why unpickling the logits processor is sometimes slow. Maybe reading the bytes from the buffer is slow under load when there's 1 GB+ coming into the socket, maybe there's contention with other inference work going on, or maybe cloudpickle just drops the ball. Unclear so far.
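To poke at the contention theory in isolation, here is a minimal standalone sketch (synthetic ~4 MB payload, not a real logits processor) that times `cloudpickle.loads()` while a competing thread fights for the GIL:

```python
import threading
import time

import cloudpickle

# ~4 MB payload, standing in for a serialized logits processor on the ZMQ wire.
payload = cloudpickle.dumps({"blob": b"x" * (4 * 1024 * 1024)})

def busy_loop(stop: threading.Event) -> None:
    # CPU-bound Python work competing for the GIL, standing in for
    # concurrent work in the engine process under load.
    while not stop.is_set():
        sum(i * i for i in range(10_000))

stop = threading.Event()
thread = threading.Thread(target=busy_loop, args=(stop,), daemon=True)
thread.start()

for _ in range(5):
    start = time.perf_counter()
    cloudpickle.loads(payload)  # holds the GIL while deserializing
    print(f"cloudpickle.loads took {time.perf_counter() - start:.3f}s")

stop.set()
thread.join()
```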