vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: RuntimeError: No suitable kernel. h_in=16 h_out=55552 dtype=Float out_dtype=BFloat16 #4640

Open AJAXLONG opened 2 months ago

AJAXLONG commented 2 months ago

Your current environment

vLLM v0.4.1, chinese-alpaca-llama2-7b, multi-LoRA

🐛 Describe the bug

Is there any way to fix this problem without recompiling?

jeejeelee commented 2 months ago

Recompiling is required.
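For context on why a rebuild is needed (a sketch, assuming vLLM v0.4.1's Punica-based LoRA kernels): the hidden/vocab sizes those CUDA kernels support are enumerated at compile time, reportedly in `csrc/punica/bgmv/bgmv_config.h`, so a padded vocab size like 55552 that is not in the list raises `RuntimeError: No suitable kernel`. The exact macro name and line shown below are assumptions based on that file's layout; verify against your checkout before applying:

```shell
# Clone the vLLM source matching your installed version (v0.4.1 assumed here).
git clone --branch v0.4.1 https://github.com/vllm-project/vllm.git
cd vllm

# Add the missing dimension to the compile-time kernel list.
# ASSUMPTION: dims are registered via f(in_T, out_T, W_T, narrow, <dim>)
# entries inside csrc/punica/bgmv/bgmv_config.h -- confirm the surrounding
# entries and copy their exact form, inserting 55552 in sorted order, e.g.:
#     f(in_T, out_T, W_T, narrow, 55552) \
$EDITOR csrc/punica/bgmv/bgmv_config.h

# Rebuild and reinstall so the new kernel instantiation is compiled in.
pip install -e .
```

This only adds one more template instantiation for the new dimension; it does not change kernel behavior for the sizes that already worked.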