vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: Can't find safetensors model weights #7953

Open felixduner opened 2 weeks ago

felixduner commented 2 weeks ago

Your current environment

The output of `python collect_env.py`:

```text
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.30.2
Libc version: glibc-2.35
Python version: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-15-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA H100 PCIe
Nvidia driver version: 535.154.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 28
On-line CPU(s) list: 0-27
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9554 64-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 14
Socket(s): 2
Stepping: 1
BogoMIPS: 6190.70
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.8 MiB (28 instances)
L1i cache: 1.8 MiB (28 instances)
L2 cache: 14 MiB (28 instances)
L3 cache: 448 MiB (28 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-13
NUMA node1 CPU(s): 14-27
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Mitigation; safe RET
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, STIBP disabled, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected

Versions of relevant libraries:
[pip3] numpy==2.0.2
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==8.9.2.26
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.6.20
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] torch==2.3.0
[pip3] transformers==4.44.2
[pip3] triton==2.3.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: N/A
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
        GPU0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      0-27            0-1             N/A

Legend:
  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
```

🐛 Describe the bug

When loading a model whose weights are in safetensors format, no weight files are found even though they exist, which causes an error.

Example code:

```python
llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", load_format="safetensors")
```

Error: `RuntimeError: Cannot find any model weights with...`
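
For reference, the weights do exist on the Hub. A minimal sanity check, assuming `huggingface_hub` is installed (not part of the original report):

```python
# Minimal check (assumes huggingface_hub is installed): list the repo files
# and confirm at least one *.safetensors weight file is present.
from huggingface_hub import list_repo_files

files = list_repo_files("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
print([f for f in files if f.endswith(".safetensors")])
# Expected to print something like ['model.safetensors']
```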

I traced the problem to the `_prepare_weights` method in `vllm/vllm/model_executor/model_loader/loader.py`, specifically this section:

```python
if use_safetensors:
    # For models like Mistral-7B-Instruct-v0.3
    # there are both sharded safetensors files and a consolidated
    # safetensors file. Using both breaks.
    # Here, we download the `model.safetensors.index.json` and filter
    # any files not found in the index.
    if not is_local:
        download_safetensors_index_file_from_hf(
            model_name_or_path, self.load_config.download_dir,
            revision)
    hf_weights_files = filter_duplicate_safetensors_files(
        hf_weights_files, hf_folder)
else:
    hf_weights_files = filter_files_not_needed_for_inference(
        hf_weights_files)

if len(hf_weights_files) == 0:
    raise RuntimeError(
        f"Cannot find any model weights with `{model_name_or_path}`")
```

I checked the length of `hf_weights_files` before this block (1) and after (0), so I'm not sure whether it's a bug or I'm missing something else.
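
To make that observation concrete, here is a simplified sketch of what index-based filtering does (an illustration only, not vLLM's actual `filter_duplicate_safetensors_files` implementation): if the index file that ends up in the download folder does not reference the weight file that was actually found, the single entry in `hf_weights_files` gets filtered away and the list drops from 1 to 0.

```python
import json
import os

def filter_by_index_sketch(weight_files: list[str], folder: str) -> list[str]:
    """Illustrative only: keep the weight files that the safetensors index names."""
    index_path = os.path.join(folder, "model.safetensors.index.json")
    if not os.path.isfile(index_path):
        # No index on disk: nothing to filter against, keep everything.
        return weight_files
    with open(index_path) as f:
        index = json.load(f)
    # The HF index maps parameter names to shard file names.
    shard_names = set(index.get("weight_map", {}).values())
    # Any file the index doesn't mention is dropped, which is how a lone
    # model.safetensors can disappear if the index doesn't reference it.
    return [p for p in weight_files if os.path.basename(p) in shard_names]
```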

Either way, since this block wasn't relevant for my type of model, I removed it and loading worked.
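
For anyone stuck on an older version, a less drastic local workaround than deleting the block might be to keep the filter but refuse a result that drops every file. This is only a sketch against the snippet above, not the upstream fix:

```python
# Sketch of a defensive guard (local workaround only, not the upstream fix):
# accept the index-based filtering result only if it leaves at least one file.
filtered = filter_duplicate_safetensors_files(hf_weights_files, hf_folder)
hf_weights_files = filtered if filtered else hf_weights_files
```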


mgoin commented 2 weeks ago

Hi @felixduner, can you try updating your version of vLLM? Your test seems to work fine on 0.5.5:

```text
>>> from vllm import LLM
>>> llm = LLM(model="TinyLlama/TinyLlama-1.1B-Chat-v1.0", load_format="safetensors")
INFO 08-28 15:53:04 llm_engine.py:210] Initializing an LLM engine (v0.5.5) with config: model='TinyLlama/TinyLlama-1.1B-Chat-v1.0', speculative_config=None, tokenizer='TinyLlama/TinyLlama-1.1B-Chat-v1.0', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=2048, download_dir=None, load_format=LoadFormat.SAFETENSORS, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=TinyLlama/TinyLlama-1.1B-Chat-v1.0, use_v2_block_manager=False, num_scheduler_steps=1, enable_prefix_caching=False, use_async_output_proc=True)

INFO 08-28 15:53:05 model_runner.py:906] Starting to load model TinyLlama/TinyLlama-1.1B-Chat-v1.0...
INFO 08-28 15:53:05 weight_utils.py:236] Using model weights format ['*.safetensors']
INFO 08-28 15:53:05 weight_utils.py:280] No model.safetensors.index.json found in remote.
Loading safetensors checkpoint shards:   0% Completed | 0/1 [00:00<?, ?it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  3.57it/s]
Loading safetensors checkpoint shards: 100% Completed | 1/1 [00:00<00:00,  3.56it/s]

INFO 08-28 15:53:05 model_runner.py:917] Loading model weights took 2.0512 GB
INFO 08-28 15:53:06 gpu_executor.py:121] # GPU blocks: 204520, # CPU blocks: 11915
INFO 08-28 15:53:07 model_runner.py:1212] Capturing the model for CUDA graphs. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI.
INFO 08-28 15:53:07 model_runner.py:1216] CUDA graphs can take additional 1~3 GiB memory per GPU. If you are running out of memory, consider decreasing `gpu_memory_utilization` or enforcing eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage.
INFO 08-28 15:53:13 model_runner.py:1331] Graph capturing finished in 6 secs.
```