vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: RuntimeError: No suitable kernel. h_in=16 h_out=7392 dtype=Float out_dtype=BFloat16 #6126

Closed JJJJerry closed 2 months ago

JJJJerry commented 3 months ago

Your current environment

The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: CentOS Linux 7 (Core) (x86_64)
GCC version: (GCC) 9.5.0
Clang version: Could not collect
CMake version: version 3.29.5
Libc version: glibc-2.17

Python version: 3.11.9 (main, Apr 19 2024, 16:48:06) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-3.10.0-1160.102.1.el7.x86_64-x86_64-with-glibc2.17
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA A800 80GB PCIe
GPU 1: NVIDIA A800 80GB PCIe
GPU 2: NVIDIA A800 80GB PCIe
GPU 3: NVIDIA A800 80GB PCIe
GPU 4: NVIDIA A800 80GB PCIe
GPU 5: NVIDIA A800 80GB PCIe
GPU 6: NVIDIA A800 80GB PCIe
GPU 7: NVIDIA A800 80GB PCIe

Nvidia driver version: 535.54.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                128
On-line CPU(s) list:   0-127
Thread(s) per core:    2
Core(s) per socket:    32
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 106
Model name:            Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz
Stepping:              6
CPU MHz:               2500.000
CPU max MHz:           2601.0000
CPU min MHz:           800.0000
BogoMIPS:              5200.00
Virtualization:        VT-x
L1d cache:             48K
L1i cache:             32K
L2 cache:              1280K
L3 cache:              49152K
NUMA node0 CPU(s):     0-31,64-95
NUMA node1 CPU(s):     32-63,96-127
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 invpcid_single ssbd mba rsb_ctxsw ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq md_clear pconfig spec_ctrl intel_stibp flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] torch==2.3.0
[pip3] torchvision==0.18.0
[pip3] transformers==4.42.3
[pip3] triton==2.3.0
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] torch                     2.3.0                    pypi_0    pypi
[conda] torchvision               0.18.0                   pypi_0    pypi
[conda] transformers              4.42.3                   pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.5.0.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV8     PXB     PXB     SYS     SYS     SYS     SYS     NODE    NODE    0-31,64-95      0               N/A
GPU1    NV8      X      PXB     PXB     SYS     SYS     SYS     SYS     NODE    NODE    0-31,64-95      0               N/A
GPU2    PXB     PXB      X      NV8     SYS     SYS     SYS     SYS     NODE    NODE    0-31,64-95      0               N/A
GPU3    PXB     PXB     NV8      X      SYS     SYS     SYS     SYS     NODE    NODE    0-31,64-95      0               N/A
GPU4    SYS     SYS     SYS     SYS      X      NV8     PXB     PXB     SYS     SYS     32-63,96-127    1               N/A
GPU5    SYS     SYS     SYS     SYS     NV8      X      PXB     PXB     SYS     SYS     32-63,96-127    1               N/A
GPU6    SYS     SYS     SYS     SYS     PXB     PXB      X      NV8     SYS     SYS     32-63,96-127    1               N/A
GPU7    SYS     SYS     SYS     SYS     PXB     PXB     NV8      X      SYS     SYS     32-63,96-127    1               N/A
NIC0    NODE    NODE    NODE    NODE    SYS     SYS     SYS     SYS      X      PIX
NIC1    NODE    NODE    NODE    NODE    SYS     SYS     SYS     SYS     PIX      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1

🐛 Describe the bug

CUDA_VISIBLE_DEVICES=4,5,6,7 python vllm_qwen2_lora.py

# vllm_qwen2_lora.py
from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest
from transformers import AutoTokenizer

if __name__ == '__main__':
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=512)
    model_path = './Qwen/Qwen2-72B-Instruct'
    tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_path)
    llm = LLM(
        model=model_path,
        gpu_memory_utilization=0.7,
        enable_lora=True,
        trust_remote_code=True,
        tensor_parallel_size=4,
        max_model_len=512,
        enforce_eager=True,
    )
    lora_path = './train_result/qwen2_lora'
    prompts = ['你好', '你是谁?', '你叫什么名字?']  # "Hello", "Who are you?", "What is your name?"

    # Wrap each prompt in the Qwen2 chat template.
    format_prompts = []
    for prompt in prompts:
        messages = [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ]
        text = tokenizer.apply_chat_template(messages,
                                             tokenize=False,
                                             add_generation_prompt=True)
        format_prompts.append(text)

    # Generate with the LoRA adapter attached.
    outputs = llm.generate(
        format_prompts,
        sampling_params,
        lora_request=LoRARequest("lora_adapter", 1, lora_path)
    )
    for i, output in enumerate(outputs):
        generated_text = output.outputs[0].text
        print(generated_text)
JJJJerry commented 3 months ago

I built vLLM from source (pre-release) with VLLM_INSTALL_PUNICA_KERNELS=1 exported. I can run this with 2 GPUs (tensor_parallel_size=2); however, it fails when I run it with 4 GPUs.

256785 commented 3 months ago

I have the same issue.

jeejeelee commented 3 months ago

Hi, #5036 should be able to address your issue. You can clone the corresponding branch to test it.

JJJJerry commented 3 months ago

Thanks, the branch "refactor-punica-kernel" works well.

JJJJerry commented 3 months ago

Hi, #5036 should be able to address your issue. You can clone the corresponding branch to test it.

But there is a bug: when I run the above script a second time, it raises this error:

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
INFO 07-04 20:27:56 config.py:703] Defaulting to use mp for distributed inference
INFO 07-04 20:27:56 llm_engine.py:169] Initializing an LLM engine (v0.5.0.post1) with config: model='/data03/irlab_share/Qwen/Qwen2-72B-Instruct', speculative_config=None, tokenizer='/data03/irlab_share/Qwen/Qwen2-72B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/data03/irlab_share/Qwen/Qwen2-72B-Instruct, use_v2_block_manager=False, enable_prefix_caching=False)
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
(VllmWorkerProcess pid=105102) INFO 07-04 20:27:59 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=105103) INFO 07-04 20:27:59 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=105101) INFO 07-04 20:27:59 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=105103) INFO 07-04 20:28:00 utils.py:720] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=105103) INFO 07-04 20:28:00 pynccl.py:63] vLLM is using nccl==2.20.5
INFO 07-04 20:28:00 utils.py:720] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=105101) INFO 07-04 20:28:00 utils.py:720] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=105102) INFO 07-04 20:28:00 utils.py:720] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=105101) INFO 07-04 20:28:00 pynccl.py:63] vLLM is using nccl==2.20.5
INFO 07-04 20:28:00 pynccl.py:63] vLLM is using nccl==2.20.5
(VllmWorkerProcess pid=105102) INFO 07-04 20:28:00 pynccl.py:63] vLLM is using nccl==2.20.5
WARNING 07-04 20:28:02 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=105101) WARNING 07-04 20:28:02 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=105102) WARNING 07-04 20:28:02 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=105103) WARNING 07-04 20:28:02 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=105103) INFO 07-04 20:28:18 model_runner.py:254] Loading model weights took 33.9833 GB
(VllmWorkerProcess pid=105102) INFO 07-04 20:28:18 model_runner.py:254] Loading model weights took 33.9833 GB
(VllmWorkerProcess pid=105101) INFO 07-04 20:28:19 model_runner.py:254] Loading model weights took 33.9833 GB
INFO 07-04 20:28:19 model_runner.py:254] Loading model weights took 33.9833 GB
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks: [Errno 2] No such file or directory: '/data/wangchunhao-slurm/.triton/cache/098aa9b899bc0244743654c666c2e82a/_sgmv_shrink_kernel.json.tmp.pid_104939_42450', Traceback (most recent call last):
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/worker/worker.py", line 175, in determine_num_available_blocks
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     self.model_runner.profile_run()
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/worker/model_runner.py", line 849, in profile_run
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     self.execute_model(model_input, kv_caches, intermediate_tensors)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/worker/model_runner.py", line 1215, in execute_model
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     hidden_or_intermediate_states = model_executable(
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 336, in forward
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     hidden_states = self.model(input_ids, positions, kv_caches,
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 257, in forward
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     hidden_states, residual = layer(
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 209, in forward
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     hidden_states = self.self_attn(
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 153, in forward
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     qkv, _ = self.qkv_proj(hidden_states)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/layers.py", line 511, in forward
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     output_parallel = self.apply(input_, bias)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/layers.py", line 918, in apply
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     _apply_lora_packed_nslice(
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/layers.py", line 134, in _apply_lora_packed_nslice
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     add_lora(output,
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/punica.py", line 231, in add_lora
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     add_shrink(
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/punica.py", line 84, in add_shrink
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     sgmv_shrink(
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/ops/sgmv_shrink.py", line 162, in sgmv_shrink
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     _sgmv_shrink_kernel[grid](
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda>
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     self.cache[device][key] = compile(
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/compiler/compiler.py", line 202, in compile
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return CompiledKernel(so_path, metadata_group.get(metadata_filename))
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/compiler/compiler.py", line 230, in __init__
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     self.asm = {
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/compiler/compiler.py", line 231, in <dictcomp>
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     file.suffix[1:]: file.read_bytes() if file.suffix[1:] == driver.binary_ext else file.read_text()
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/pathlib.py", line 1134, in read_text
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     with self.open(mode='r', encoding=encoding, errors=errors) as f:
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]   File "/data/wangchunhao-slurm/workspace/anaconda/envs/llama_factory/lib/python3.10/pathlib.py", line 1119, in open
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226]     return self._accessor.open(self, mode, buffering, encoding, errors,
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226] FileNotFoundError: [Errno 2] No such file or directory: '/data/wangchunhao-slurm/.triton/cache/098aa9b899bc0244743654c666c2e82a/_sgmv_shrink_kernel.json.tmp.pid_104939_42450'
(VllmWorkerProcess pid=105101) ERROR 07-04 20:28:41 multiproc_worker_utils.py:226] 

I have to delete the Triton cache manually before I can run it again (a small helper sketch is included after the log below). And sometimes there is a decode error instead:

INFO 07-04 20:07:09 config.py:703] Defaulting to use mp for distributed inference
INFO 07-04 20:07:09 llm_engine.py:169] Initializing an LLM engine (v0.5.0.post1) with config: model='/data03/xxx_share/Qwen/Qwen2-72B-Instruct', speculative_config=None, tokenizer='/data03/xxx_share/Qwen/Qwen2-72B-Instruct', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=4, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None), seed=0, served_model_name=/data03/xxx_share/Qwen/Qwen2-72B-Instruct, use_v2_block_manager=False, enable_prefix_caching=False)
(VllmWorkerProcess pid=87017) INFO 07-04 20:07:12 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=87018) INFO 07-04 20:07:12 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=87016) INFO 07-04 20:07:12 multiproc_worker_utils.py:215] Worker ready; awaiting tasks
(VllmWorkerProcess pid=87016) INFO 07-04 20:07:13 utils.py:720] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=87016) INFO 07-04 20:07:13 pynccl.py:63] vLLM is using nccl==2.20.5
(VllmWorkerProcess pid=87017) INFO 07-04 20:07:13 utils.py:720] Found nccl from library libnccl.so.2
INFO 07-04 20:07:13 utils.py:720] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=87017) INFO 07-04 20:07:13 pynccl.py:63] vLLM is using nccl==2.20.5
INFO 07-04 20:07:13 pynccl.py:63] vLLM is using nccl==2.20.5
(VllmWorkerProcess pid=87018) INFO 07-04 20:07:13 utils.py:720] Found nccl from library libnccl.so.2
(VllmWorkerProcess pid=87018) INFO 07-04 20:07:13 pynccl.py:63] vLLM is using nccl==2.20.5
WARNING 07-04 20:07:15 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=87016) WARNING 07-04 20:07:15 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=87017) WARNING 07-04 20:07:15 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
(VllmWorkerProcess pid=87018) WARNING 07-04 20:07:15 custom_all_reduce.py:118] Custom allreduce is disabled because it's not supported on more than two PCIe-only GPUs. To silence this warning, specify disable_custom_all_reduce=True explicitly.
INFO 07-04 20:07:32 model_runner.py:254] Loading model weights took 33.9833 GB
(VllmWorkerProcess pid=87016) INFO 07-04 20:07:32 model_runner.py:254] Loading model weights took 33.9833 GB
(VllmWorkerProcess pid=87017) INFO 07-04 20:07:33 model_runner.py:254] Loading model weights took 33.9833 GB
(VllmWorkerProcess pid=87018) INFO 07-04 20:07:33 model_runner.py:254] Loading model weights took 33.9833 GB
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method determine_num_available_blocks: 'utf-8' codec can't decode byte 0xbe in position 18: invalid start byte, Traceback (most recent call last):
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/worker/worker.py", line 175, in determine_num_available_blocks
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     self.model_runner.profile_run()
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/worker/model_runner.py", line 849, in profile_run
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     self.execute_model(model_input, kv_caches, intermediate_tensors)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/worker/model_runner.py", line 1215, in execute_model
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     hidden_or_intermediate_states = model_executable(
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 336, in forward
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     hidden_states = self.model(input_ids, positions, kv_caches,
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 257, in forward
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     hidden_states, residual = layer(
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 209, in forward
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     hidden_states = self.self_attn(
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/model_executor/models/qwen2.py", line 153, in forward
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     qkv, _ = self.qkv_proj(hidden_states)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return self._call_impl(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return forward_call(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/layers.py", line 511, in forward
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     output_parallel = self.apply(input_, bias)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/layers.py", line 918, in apply
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     _apply_lora_packed_nslice(
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/layers.py", line 134, in _apply_lora_packed_nslice
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     add_lora(output,
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/punica.py", line 253, in add_lora
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     add_expand_slice(
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/punica.py", line 159, in add_expand_slice
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     sgmv_expand_slice(
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data05/project/LLaMA-Factory-20240624/vllm-refactor-punica-kernel/vllm/lora/ops/sgmv_expand_slice.py", line 178, in sgmv_expand_slice
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     _sgmv_expand_slice_kernel[grid](
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/runtime/jit.py", line 167, in <lambda>
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return lambda *args, **kwargs: self.run(grid=grid, warmup=False, *args, **kwargs)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/runtime/jit.py", line 416, in run
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     self.cache[device][key] = compile(
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/compiler/compiler.py", line 202, in compile
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return CompiledKernel(so_path, metadata_group.get(metadata_filename))
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/compiler/compiler.py", line 230, in __init__
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     self.asm = {
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/site-packages/triton/compiler/compiler.py", line 231, in <dictcomp>
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     file.suffix[1:]: file.read_bytes() if file.suffix[1:] == driver.binary_ext else file.read_text()
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/pathlib.py", line 1135, in read_text
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     return f.read()
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]   File "/data/xxx/workspace/anaconda/envs/llama_factory/lib/python3.10/codecs.py", line 322, in decode
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226]     (result, consumed) = self._buffer_decode(data, self.errors, final)
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226] UnicodeDecodeError: 'utf-8' codec can't decode byte 0xbe in position 18: invalid start byte
(VllmWorkerProcess pid=87016) ERROR 07-04 20:07:55 multiproc_worker_utils.py:226] 
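
For reference, clearing the cache between runs can be scripted. This is a minimal sketch, assuming Triton's default cache directory of ~/.triton/cache (Triton also honours the TRITON_CACHE_DIR environment variable when it is set):

    import os
    import shutil
    from pathlib import Path

    # Triton writes its JIT cache to $TRITON_CACHE_DIR, falling back to ~/.triton/cache.
    cache_dir = Path(os.environ.get("TRITON_CACHE_DIR",
                                    str(Path.home() / ".triton" / "cache")))

    if cache_dir.is_dir():
        # Remove the whole cache so a stale or partially written kernel file
        # left behind by a crashed run cannot break the next compilation.
        shutil.rmtree(cache_dir)
        print(f"Removed Triton cache at {cache_dir}")
    else:
        print(f"No Triton cache found at {cache_dir}")
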
JJJJerry commented 3 months ago

When I run it with vLLM's API server (python -m vllm.entrypoints.openai.api_server --...), the LoRA adapter seems to have no effect; the output looks like the original base model. But the LoRA adapter works well when I run the above script.
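
For reference, a rough sketch of how an adapter is usually registered with and selected through the OpenAI-compatible server; the launch flags, adapter name, and port below are assumptions based on the script above, not the exact command that was run here:

    # Assumed launch command, mirroring the paths in the reproduction script:
    #   python -m vllm.entrypoints.openai.api_server \
    #       --model ./Qwen/Qwen2-72B-Instruct \
    #       --enable-lora \
    #       --lora-modules lora_adapter=./train_result/qwen2_lora \
    #       --tensor-parallel-size 4
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    # The request has to name the registered adapter in `model`; using the base
    # model name instead serves the original (non-LoRA) weights.
    resp = client.chat.completions.create(
        model="lora_adapter",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "你好"},
        ],
        max_tokens=512,
    )
    print(resp.choices[0].message.content)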

jeejeelee commented 3 months ago

This is a Triton bug; refer to https://github.com/vllm-project/vllm/issues/6103

Currently, you can temporarily avoid this error by setting distributed_executor_backend to ray. For example:

    llm = vllm.LLM(
        MODEL_PATH,
        enable_lora=True,
        max_num_seqs=16,
        max_loras=2,
        trust_remote_code=True,
        gpu_memory_utilization=0.3,
        tensor_parallel_size=4,
        distributed_executor_backend="ray"
    )
jeejeelee commented 3 months ago

When I run it with vLLM's API server (python -m vllm.entrypoints.openai.api_server --...), the LoRA adapter seems to have no effect; the output looks like the original base model. But the LoRA adapter works well when I run the above script.

I will check into this issue tomorrow

JJJJerry commented 3 months ago

This is a Triton bug; refer to #6103

Currently, you can temporarily avoid this error by setting distributed_executor_backend to ray. For example:

    llm = vllm.LLM(
        MODEL_PATH,
        enable_lora=True,
        max_num_seqs=16,
        max_loras=2,
        trust_remote_code=True,
        gpu_memory_utilization=0.3,
        tensor_parallel_size=4,
        distributed_executor_backend="ray"
    )

Setting distributed_executor_backend="ray" works for me.
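
For completeness, a sketch of the reproduction script's LLM constructor with that workaround applied; every other argument mirrors the original script above:

    from vllm import LLM

    llm = LLM(
        model='./Qwen/Qwen2-72B-Instruct',
        gpu_memory_utilization=0.7,
        enable_lora=True,
        trust_remote_code=True,
        tensor_parallel_size=4,
        max_model_len=512,
        enforce_eager=True,
        distributed_executor_backend="ray",  # use Ray workers instead of the default mp backend
    )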

JJJJerry commented 3 months ago

When I run it with vLLM's API server (python -m vllm.entrypoints.openai.api_server --...), the LoRA adapter seems to have no effect; the output looks like the original base model. But the LoRA adapter works well when I run the above script.

I will check into this issue tomorrow

I ran the API server with LLaMA-Factory, and the adapter works well there.

mgoin commented 2 months ago

This should be resolved by the newly landed Triton kernels: https://github.com/vllm-project/vllm/pull/5036