vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: TimeoutError: MQLLMEngine didn't reply within 10000ms #8836

Open · Quang-elec44 opened this issue 3 weeks ago

Quang-elec44 commented 3 weeks ago

Your current environment

The output of `python collect_env.py`:

```text
Collecting environment information...
INFO 09-26 04:45:26 importing.py:10] Triton not installed; certain GPU-related functions will not be available.
PyTorch version: 2.4.0+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.30.3
Libc version: glibc-2.35

Python version: 3.10.12 (main, Sep 11 2024, 15:47:36) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.8.0-1016-aws-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Vendor ID: AuthenticAMD
Model name: AMD EPYC 7R32
CPU family: 23
Model: 49
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 1
Stepping: 0
BogoMIPS: 5599.99
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy cr8_legacy abm sse4a misalignsse 3dnowprefetch topoext ssbd ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 clzero xsaveerptr rdpru wbnoinvd arat npt nrip_save rdpid
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 768 KiB (24 instances)
L1i cache: 768 KiB (24 instances)
L2 cache: 12 MiB (24 instances)
L3 cache: 96 MiB (6 instances)
NUMA node(s): 1
NUMA node0 CPU(s): 0-47
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT enabled with STIBP protection
Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP always-on; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] intel_extension_for_pytorch==2.4.0+gitfbaa4bc
[pip3] numpy==1.26.4
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0+cpu
[pip3] torchvision==0.19.0+cpu
[pip3] transformers==4.44.2
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.1.post2@18ae428a0d8792d160d811a9cd5bb004d68ea8bd
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect
```

Model Input Dumps

No response

🐛 Describe the bug

```yaml
# docker-compose.yml
services:
  llm-vllm-cpu:
    image: vllm/vllm-openai:cpu
    container_name: llm-vllm-cpu
    restart: unless-stopped
    environment:
      HUGGING_FACE_HUB_TOKEN: <my-token>
    ports:
      - "8007:8007"
    deploy:
      resources:
        limits:
          cpus: "28"
          memory: 24GB
    ipc: host
    volumes:
      - ~/.cache/huggingface:/root/.cache/huggingface
    command: >
      --host 0.0.0.0
      --port 8007
      --api-key <my-api-key>
      --max-model-len 2048
      --served-model-name llama3.2
      --seed 42
      --device cpu
      --dtype bfloat16
      --disable-log-requests
      --model meta-llama/Llama-3.2-1B-Instruct
```
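
For context on how the timeout surfaces, a minimal client call against this setup might look like the sketch below. This assumes the container is reachable on `localhost:8007` and uses the `openai` Python package; the model name matches `--served-model-name` above, and the key must match `--api-key`. If the MQLLMEngine has crashed or stalled, a request like this would come back with the `TimeoutError: MQLLMEngine didn't reply within 10000ms` from the title instead of a completion.

```python
# Minimal reproduction sketch against the compose service above.
# Assumes the `openai` package is installed (pip install openai).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8007/v1",  # --host/--port from the compose file
    api_key="<my-api-key>",               # must match the --api-key flag
)

response = client.chat.completions.create(
    model="llama3.2",  # matches --served-model-name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```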


joerunde commented 3 days ago

@Quang-elec44 I think we'd need some logs or more info to know what went wrong here.

It's a known issue that requests can crash the MQLLMEngine. If no further requests reach the server after such a crash, the engine quits silently, and the server eventually shuts itself down once it stops receiving heartbeats from the engine. This open PR fixes that situation by making the server shut down immediately when the engine crashes: https://github.com/vllm-project/vllm/pull/9023
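
For readers unfamiliar with the mechanism described above: the engine runs in a separate process and periodically signals liveness to the API server, which gives up once the heartbeats stop. Below is a generic sketch of that watchdog pattern, not vLLM's actual code; the names (`Watchdog`, `HEARTBEAT_TIMEOUT_S`) and the polling loop are illustrative only.

```python
# Generic heartbeat-watchdog sketch; illustrative, not vLLM's implementation.
import threading
import time

HEARTBEAT_TIMEOUT_S = 10.0  # analogous to the 10000ms in the error message

class Watchdog:
    def __init__(self, on_dead):
        self._last_beat = time.monotonic()
        self._on_dead = on_dead  # callback to shut the server down
        threading.Thread(target=self._watch, daemon=True).start()

    def beat(self):
        # Called whenever a heartbeat arrives from the engine process.
        self._last_beat = time.monotonic()

    def _watch(self):
        while True:
            time.sleep(1.0)
            if time.monotonic() - self._last_beat > HEARTBEAT_TIMEOUT_S:
                # Engine stopped responding: fail fast rather than hang.
                self._on_dead()
                return
```

The behavior the PR targets is the window before `on_dead` fires: with no incoming requests, nothing surfaces the crash to clients until the heartbeat timeout lapses, so shutting down immediately on engine crash removes that silent period.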