vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Embedding doesn't work with `device="cpu"` #5593

Open TheRoadQaQ opened 3 months ago

TheRoadQaQ commented 3 months ago

Your current environment

The output of `python collect_env.py`
Collecting environment information...
WARNING 06-17 14:57:49 ray_utils.py:46] Failed to import Ray with ModuleNotFoundError("No module named 'ray'"). For multi-node inference, please install Ray with `pip install ray`.
PyTorch version: 2.3.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 12.3.0-1ubuntu1~22.04) 12.3.0
Clang version: Could not collect
CMake version: version 3.29.5
Libc version: glibc-2.35

Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: False
CUDA runtime version: 12.4.99

🐛 Describe the bug

The reproduction code is:

from vllm import LLM

# Sample prompts.
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]

# Create an LLM on the CPU backend.
llm = LLM(model="intfloat/e5-mistral-7b-instruct", device="cpu", trust_remote_code=True)
# Compute embeddings for the prompts. The output is a list of
# EmbeddingRequestOutput objects that contain the prompt, the embedding,
# and other information.
outputs = llm.encode(prompts)
# Print the outputs.
for output in outputs:
    print(f"Prompt: {output.prompt!r}, embedding length: {len(output.outputs.embedding)}")

The error is raised from vllm/worker/cpu_worker.py, line 303, in execute_model.
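Until the CPU backend supports encode(), a possible interim workaround is to compute the embeddings on CPU with plain Hugging Face transformers instead of vLLM. The following is only a minimal sketch (untested here); it assumes right padding and the last-token pooling described on the e5-mistral-7b-instruct model card.

import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_name = "intfloat/e5-mistral-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The Mistral tokenizer may not define a pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(model_name, torch_dtype=torch.float32)
model.eval()

prompts = ["Hello, my name is", "The capital of France is"]

with torch.no_grad():
    batch = tokenizer(prompts, padding=True, return_tensors="pt")
    last_hidden = model(**batch).last_hidden_state  # [batch, seq, hidden]
    # Last-token pooling (assumes right padding): take the hidden state of
    # the last non-padding token in each sequence.
    last_token_idx = batch["attention_mask"].sum(dim=1) - 1
    embeddings = last_hidden[torch.arange(last_hidden.size(0)), last_token_idx]
    embeddings = F.normalize(embeddings, p=2, dim=1)

print(embeddings.shape)  # (num_prompts, hidden_size)

This bypasses vLLM entirely, so it will be slower, and a 7B model in float32 needs roughly 28 GB of RAM; it is only meant to unblock CPU-only embedding until encode() works in the CPU worker.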

zhouyuan commented 3 months ago

@TheRoadQaQ thanks for reporting. Indeed, the encode() API is not well supported on the CPU backend yet; I will try to make a patch for this.

thanks, -yuan