vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Can not run openapi server with cpu backend #4403

Closed: kannon92 closed this issue 5 months ago

kannon92 commented 5 months ago

Your current environment

Collecting environment information...
PyTorch version: 2.2.1+cpu
Is debug build: False
CUDA used to build PyTorch: None
ROCM used to build PyTorch: N/A

OS: Fedora Linux 40 (Workstation Edition) (x86_64)
GCC version: (GCC) 14.0.1 20240411 (Red Hat 14.0.1-0)
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.39

Python version: 3.11.8 (main, Feb 28 2024, 00:00:00) [GCC 14.0.1 20240217 (Red Hat 14.0.1-0)] (64-bit runtime)
Python platform: Linux-6.8.7-300.fc40.x86_64-x86_64-with-glibc2.39
Is CUDA available: False
CUDA runtime version: No CUDA
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: No CUDA
Nvidia driver version: No CUDA
cuDNN version: No CUDA
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        39 bits physical, 48 bits virtual
Byte Order:                           Little Endian
CPU(s):                               16
On-line CPU(s) list:                  0-15
Vendor ID:                            GenuineIntel
Model name:                           11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz
CPU family:                           6
Model:                                141
Thread(s) per core:                   2
Core(s) per socket:                   8
Socket(s):                            1
Stepping:                             1
CPU(s) scaling MHz:                   29%
CPU max MHz:                          4800.0000
CPU min MHz:                          800.0000
BogoMIPS:                             4992.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l2 cdp_l2 ssbd ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves split_lock_detect user_shstk dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp hwp_pkg_req vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq rdpid movdiri movdir64b fsrm avx512_vp2intersect md_clear ibt flush_l1d arch_capabilities
Virtualization:                       VT-x
L1d cache:                            384 KiB (8 instances)
L1i cache:                            256 KiB (8 instances)
L2 cache:                             10 MiB (8 instances)
L3 cache:                             24 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-15
Vulnerability Gather data sampling:   Mitigation; Microcode
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] mypy==1.9.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] torch==2.2.1+cpu
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
Could not collect

🐛 Describe the bug

I am having trouble running the OpenAI API server with the CPU backend; I can't confirm whether the same setup works on GPU.

I followed the instructions for building the code and then ran the OpenAI API server:

python3 -m vllm.entrypoints.openai.api_server --model microsoft/phi-2

output:

python3 -m vllm.entrypoints.openai.api_server --model microsoft/phi-2
INFO 04-26 16:59:01 api_server.py:151] vLLM API server version 0.4.1
INFO 04-26 16:59:01 api_server.py:152] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, served_model_name=None, lora_modules=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='microsoft/phi-2', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=5, disable_log_stats=False, quantization=None, enforce_eager=False, max_context_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', max_cpu_loras=None, device='auto', image_input_type=None, image_token_id=None, image_input_shape=None, image_feature_size=None, scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_max_model_len=None, model_loader_extra_config=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
INFO 04-26 16:59:02 llm_engine.py:98] Initializing an LLM engine (v0.4.1) with config: model='microsoft/phi-2', speculative_config=None, tokenizer='microsoft/phi-2', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=2048, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=Falsequantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING 04-26 16:59:02 cpu_executor.py:128] float16 is not supported on CPU, casting to bfloat16.
WARNING 04-26 16:59:02 cpu_executor.py:131] CUDA graph is not supported on CPU, fallback to the eager mode.
WARNING 04-26 16:59:02 cpu_executor.py:159] Environment variable VLLM_CPU_KVCACHE_SPACE (GB) for CPU backend is not set, using 4 by default.
INFO 04-26 16:59:02 selector.py:43] Using Torch SDPA backend.
[W ProcessGroupGloo.cpp:721] Warning: Unable to resolve hostname to a (local) address. Using the loopback address as fallback. Manually set the network interface to bind to with GLOO_SOCKET_IFNAME. (function operator())
INFO 04-26 16:59:03 weight_utils.py:193] Using model weights format ['*.safetensors']
INFO 04-26 16:59:03 cpu_executor.py:72] # CPU blocks: 819
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING 04-26 16:59:04 serving_chat.py:346] No chat template provided. Chat API will not work.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO:     Started server process [518250]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
--- Logging error ---
Traceback (most recent call last):
  File "/usr/lib64/python3.11/logging/__init__.py", line 1110, in emit
    msg = self.format(record)
          ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/logging/__init__.py", line 953, in format
    return fmt.format(record)
           ^^^^^^^^^^^^^^^^^^
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/vllm-0.4.1+cpu-py3.11-linux-x86_64.egg/vllm/logger.py", line 24, in format
    msg = logging.Formatter.format(self, record)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/logging/__init__.py", line 687, in format
    record.message = record.getMessage()
                     ^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.11/logging/__init__.py", line 377, in getMessage
    msg = msg % self.args
          ~~~~^~~~~~~~~~~
ValueError: unsupported format character ',' (0x2c) at index 159
Call stack:
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/vllm-0.4.1+cpu-py3.11-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 169, in <module>
    uvicorn.run(app,
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/uvicorn/main.py", line 575, in run
    server.run()
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/uvicorn/server.py", line 65, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib64/python3.11/asyncio/runners.py", line 190, in run
    return runner.run(main)
  File "/usr/lib64/python3.11/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/vllm-0.4.1+cpu-py3.11-linux-x86_64.egg/vllm/entrypoints/openai/api_server.py", line 41, in _force_log
    await engine.do_log_stats()
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/vllm-0.4.1+cpu-py3.11-linux-x86_64.egg/vllm/engine/async_llm_engine.py", line 704, in do_log_stats
    self.engine.do_log_stats()
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/vllm-0.4.1+cpu-py3.11-linux-x86_64.egg/vllm/engine/llm_engine.py", line 601, in do_log_stats
    self.stat_logger.log(self._get_stats(scheduler_outputs=None))
  File "/home/kehannon/Work/LLMs/vllm/vll311/lib64/python3.11/site-packages/vllm-0.4.1+cpu-py3.11-linux-x86_64.egg/vllm/engine/metrics.py", line 229, in log
    logger.info(
Message: 'Avg prompt throughput: %.1f tokens/s, Avg generation throughput: %.1f tokens/s, Running: %d reqs, Swapped: %d reqs, Pending: %d reqs, GPU KV cache usage: %.1f%, CPU KV cache usage: %.1f%'
Arguments: (0.0, 0.0, 0, 0, 0, 0.0, 0.0)
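
For context, the "--- Logging error ---" above comes from the stats log message itself: it is a printf-style format string, and the literal percent signs after the two %.1f cache-usage fields are not escaped, so the ',' that follows is parsed as an (invalid) format character. A minimal standalone reproduction, not vLLM code, with the escaped form that avoids the error:

import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("repro")

# Broken: after '%.1f' the next '%' starts a new conversion specifier, and the
# ',' that follows is not a valid format character, so the logging module
# prints a "--- Logging error ---" traceback instead of the message.
logger.info("GPU KV cache usage: %.1f%, CPU KV cache usage: %.1f%", 0.0, 0.0)

# Fixed: literal percent signs in printf-style log messages must be escaped as '%%'.
logger.info("GPU KV cache usage: %.1f%%, CPU KV cache usage: %.1f%%", 0.0, 0.0)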
simon-mo commented 5 months ago

I think this PR fixes it: https://github.com/vllm-project/vllm/pull/4396

andysalerno commented 5 months ago

This issue still occurs for me when building from Dockerfile.cpu at the latest commit b31a1fb63c98fa1c64666aaae15579439af60d95:

python3 -m vllm.entrypoints.openai.api_server --model microsoft/Phi-3-mini-128k-instruct --trust-remote-code --max-model-len 8000
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 93, in __call__
    await self.simple_response(scope, receive, send, request_headers=headers)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/cors.py", line 148, in simple_response
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 65, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 756, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 776, in app
    await route.handle(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 72, in app
    response = await func(request)
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 278, in app
    raw_response = await run_endpoint_function(
  File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/workspace/vllm/vllm/entrypoints/openai/api_server.py", line 90, in create_chat_completion
    generator = await openai_serving_chat.create_chat_completion(
  File "/workspace/vllm/vllm/entrypoints/openai/serving_chat.py", line 128, in create_chat_completion
    return await self.chat_completion_full_generator(
  File "/workspace/vllm/vllm/entrypoints/openai/serving_chat.py", line 290, in chat_completion_full_generator
    async for res in result_generator:
  File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 663, in generate
    raise e
  File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 657, in generate
    async for request_output in stream:
  File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 77, in __anext__
    raise result
  File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 38, in _raise_exception_on_finish
    task.result()
  File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 498, in run_engine_loop
    has_requests_in_progress = await asyncio.wait_for(
  File "/usr/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 472, in engine_step
    request_outputs = await self.engine.step_async()
  File "/workspace/vllm/vllm/engine/async_llm_engine.py", line 213, in step_async
    output = await self.model_executor.execute_model_async(
  File "/workspace/vllm/vllm/executor/cpu_executor.py", line 114, in execute_model_async
    output = await make_async(self.driver_worker.execute_model)(
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
TypeError: CPUWorker.execute_model() got an unexpected keyword argument 'num_lookahead_slots'
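
For what it's worth, this TypeError is a different failure from the logging error in the original report: the async engine passes num_lookahead_slots through to the worker's execute_model(), and the CPU worker's signature evidently does not accept that keyword yet. A rough, hypothetical sketch of that kind of mismatch (simplified names, not the actual vLLM classes):

# Hypothetical sketch of the signature mismatch behind the TypeError above.
class GpuLikeWorker:
    def execute_model(self, seq_group_metadata_list, num_lookahead_slots=0):
        # Accepts the newer keyword, so this path keeps working.
        return []

class CpuLikeWorker:
    def execute_model(self, seq_group_metadata_list):
        # No num_lookahead_slots parameter, so the call below fails.
        return []

def engine_step(worker):
    # The caller passes the keyword unconditionally; on CpuLikeWorker this raises
    # TypeError: execute_model() got an unexpected keyword argument 'num_lookahead_slots'
    return worker.execute_model(seq_group_metadata_list=[], num_lookahead_slots=0)

engine_step(GpuLikeWorker())  # works
engine_step(CpuLikeWorker())  # reproduces the TypeError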
andysalerno commented 5 months ago

My mistake; it's a similar setup but a different error.

navpreet-np7 commented 5 months ago

> My mistake; it's a similar setup but a different error.

@andysalerno I am still facing this. Did you resolve the issue? I also built from Dockerfile.cpu.