gaojuntian opened this issue 4 days ago
This feature will be included in the upcoming release; see https://github.com/vllm-project/vllm/pull/9467 and https://github.com/vllm-project/vllm/pull/9574. In the meantime, you can consider building the main branch manually to address your issue.
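Until that release ships, a quick way to confirm which build you are running (the Qwen2 BitsAndBytes support is only on `main` for now):

```python
# Print the installed vLLM version; releases prior to the PRs above
# will still raise the AttributeError described below.
import vllm

print(vllm.__version__)
```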
Your current environment
""" This example shows how to use LoRA with different quantization techniques for offline inference.
Requires HuggingFace credentials for access. """
import gc from typing import List, Optional, Tuple
import torch from huggingface_hub import snapshot_download
from vllm import EngineArgs, LLMEngine, RequestOutput, SamplingParams from vllm.lora.request import LoRARequest
def create_test_prompts( lora_path: str ) -> List[Tuple[str, SamplingParams, Optional[LoRARequest]]]: return [
this is an example of using quantization without LoRA
def process_requests(engine: LLMEngine, test_prompts: List[Tuple[str, SamplingParams, Optional[LoRARequest]]]): """Continuously process a list of prompts and handle the outputs.""" request_id = 0
def initialize_engine(model: str, quantization: str, lora_repo: Optional[str]) -> LLMEngine: """Initialize the LLMEngine."""
def main(): """Main function that sets up and runs the prompt processing."""
if name == 'main': main()
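The paste cuts off the function bodies. For reference, a minimal sketch of how this example initializes an engine with BitsAndBytes quantization plus LoRA; the `EngineArgs` field names follow the vLLM example the snippet is based on and may differ across versions, so treat them as assumptions:

```python
# Sketch of a BitsAndBytes + LoRA engine setup, assuming the EngineArgs
# fields used by vLLM's lora_with_quantization_inference example.
def initialize_engine(model: str, quantization: str,
                      lora_repo: Optional[str]) -> LLMEngine:
    """Initialize the LLMEngine."""
    if quantization == "bitsandbytes":
        # QLoRA-style path: quantize the base weights at load time and
        # point the engine at the adapter repository.
        engine_args = EngineArgs(model=model,
                                 quantization=quantization,
                                 qlora_adapter_name_or_path=lora_repo,
                                 load_format="bitsandbytes",
                                 enable_lora=True,
                                 max_lora_rank=64)
    else:
        engine_args = EngineArgs(model=model,
                                 quantization=quantization,
                                 enable_lora=True,
                                 max_loras=4)
    return LLMEngine.from_engine_args(engine_args)
```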
Model Input Dumps
No response
🐛 Describe the bug
[Bug]: I trained Qwen/Qwen2.5-1.5B-Instruct with QLoRA using the factory_llama tool, then started vLLM with the LoRA adapter loaded, and it failed with: `AttributeError: Model Qwen2ForCausalLM does not support BitsAndBytes quantization yet.` Does anyone know where the problem is?
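For context, this error is raised by vLLM's BitsAndBytes model loader, which rejects model classes that do not declare BitsAndBytes support. A rough sketch of that guard; the exact attribute checked is an assumption recalled from the loader around this version, not verified against your install:

```python
# Sketch of the loader-side check that produces this AttributeError
# (attribute name is an assumption; see
# vllm/model_executor/model_loader/loader.py in your installed version).
if not hasattr(model, "bitsandbytes_stacked_params_mapping"):
    raise AttributeError(
        f"Model {type(model).__name__} does not support "
        "BitsAndBytes quantization yet.")
```

The two PRs linked above add this support for Qwen2, which is why building from `main` resolves the error.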