vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: vLLM embeddings example code doesn't work #5111

Closed: orionw closed this issue 4 months ago

orionw commented 4 months ago

Your current environment


Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Debian GNU/Linux 10 (buster) (x86_64)
GCC version: (Debian 8.3.0-6) 8.3.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.28

Python version: 3.10.14 | packaged by conda-forge | (main, Mar 20 2024, 12:45:18) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-4.19.0-26-cloud-amd64-x86_64-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.3.109
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L4
Nvidia driver version: 550.54.15
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
Address sizes:       46 bits physical, 48 bits virtual
CPU(s):              12
On-line CPU(s) list: 0-11
Thread(s) per core:  2
Core(s) per socket:  6
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) CPU @ 2.20GHz
Stepping:            7
CPU MHz:             2200.182
BogoMIPS:            4400.36
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            39424K
NUMA node0 CPU(s):   0-11
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves arat avx512_vnni md_clear arch_capabilities

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnxruntime==1.18.0
[pip3] torch==2.3.0
[pip3] triton==2.3.0
[pip3] vllm_nccl_cu12==2.18.1.0.4.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-11    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

🐛 Describe the bug

I ran the vLLM embedding example code from https://github.com/vllm-project/vllm/blob/main/examples/offline_inference_embedding.py with the latest released version (vLLM 0.4.2).
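
For reference, the example boils down to roughly the following (paraphrased from the linked script, so treat it as a sketch; the exact prompts don't matter because the failure happens at the LLM(...) constructor before encode() is ever reached):

from vllm import LLM

# Prompts to embed (same spirit as the example script).
prompts = [
    "Hello, my name is",
    "The president of the United States is",
    "The capital of France is",
    "The future of AI is",
]

# This is line 12 of my script, where the traceback below originates.
model = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)

# Never reached: encode() should return one EmbeddingRequestOutput per prompt.
outputs = model.encode(prompts)
for output in outputs:
    print(len(output.outputs.embedding))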

I got the following error:

/opt/conda/envs/instruct/lib/python3.10/site-packages/huggingface_hub/file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
INFO 05-29 21:05:57 llm_engine.py:100] Initializing an LLM engine (v0.4.2) with config: model='intfloat/e5-mistral-7b-instruct', speculative_config=None, tokenizer='intfloat/e5-mistral-7b-instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=intfloat/e5-mistral-7b-instruct)
INFO 05-29 21:05:57 utils.py:660] Found nccl from library /home/orionweller/.config/vllm/nccl/cu12/libnccl.so.2.18.1
INFO 05-29 21:05:59 selector.py:81] Cannot use FlashAttention-2 backend because the flash_attn package is not found. Please install it for better performance.
INFO 05-29 21:05:59 selector.py:32] Using XFormers backend.
[rank0]: Traceback (most recent call last):
[rank0]:   File "/home/orionweller/retrieval-w-instructions/src/instruction_retrieval/generation/vllm_embed_example.py", line 12, in <module>
[rank0]:     model = LLM(model="intfloat/e5-mistral-7b-instruct", enforce_eager=True)
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 123, in __init__
[rank0]:     self.llm_engine = LLMEngine.from_engine_args(
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 292, in from_engine_args
[rank0]:     engine = cls(
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 160, in __init__
[rank0]:     self.model_executor = executor_class(
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/executor/executor_base.py", line 41, in __init__
[rank0]:     self._init_executor()
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 23, in _init_executor
[rank0]:     self._init_non_spec_worker()
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 69, in _init_non_spec_worker
[rank0]:     self.driver_worker.load_model()
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/worker/worker.py", line 118, in load_model
[rank0]:     self.model_runner.load_model()
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/worker/model_runner.py", line 164, in load_model
[rank0]:     self.model = get_model(
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/model_executor/model_loader/__init__.py", line 19, in get_model
[rank0]:     return loader.load_model(model_config=model_config,
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 222, in load_model
[rank0]:     model = _initialize_model(model_config, self.load_config,
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/model_executor/model_loader/loader.py", line 85, in _initialize_model
[rank0]:     model_class = get_model_architecture(model_config)[0]
[rank0]:   File "/opt/conda/envs/instruct/lib/python3.10/site-packages/vllm/model_executor/model_loader/utils.py", line 35, in get_model_architecture
[rank0]:     raise ValueError(
[rank0]: ValueError: Model architectures ['MistralModel'] are not supported for now. Supported architectures: ['AquilaModel', 'AquilaForCausalLM', 'BaiChuanForCausalLM', 'BaichuanForCausalLM', 'BloomForCausalLM', 'ChatGLMModel', 'ChatGLMForConditionalGeneration', 'CohereForCausalLM', 'DbrxForCausalLM', 'DeciLMForCausalLM', 'DeepseekForCausalLM', 'FalconForCausalLM', 'GemmaForCausalLM', 'GPT2LMHeadModel', 'GPTBigCodeForCausalLM', 'GPTJForCausalLM', 'GPTNeoXForCausalLM', 'InternLMForCausalLM', 'InternLM2ForCausalLM', 'JAISLMHeadModel', 'LlamaForCausalLM', 'LlavaForConditionalGeneration', 'LLaMAForCausalLM', 'MistralForCausalLM', 'MixtralForCausalLM', 'QuantMixtralForCausalLM', 'MptForCausalLM', 'MPTForCausalLM', 'MiniCPMForCausalLM', 'OlmoForCausalLM', 'OPTForCausalLM', 'OrionForCausalLM', 'PhiForCausalLM', 'Phi3ForCausalLM', 'QWenLMHeadModel', 'Qwen2ForCausalLM', 'Qwen2MoeForCausalLM', 'RWForCausalLM', 'StableLMEpochForCausalLM', 'StableLmForCausalLM', 'Starcoder2ForCausalLM', 'XverseForCausalLM']
orionw commented 4 months ago

@CatherineSue, did this happen to you during development? Great work on this, by the way; I'm very excited about embeddings in vLLM!

orionw commented 4 months ago

Ah, from #4908 I see that I need to install from source, since embedding support isn't in the 0.4.2 release yet. Sorry about that!