Open hnn123 opened 3 months ago
Currently, vLLM does not support DoRA.
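For background on why DoRA checkpoints contain an extra tensor: DoRA (Weight-Decomposed Low-Rank Adaptation) re-parameterizes the merged weight as a unit-norm direction matrix scaled column-wise by a learned magnitude vector, and that magnitude vector is saved as `lora_magnitude_vector`, which plain-LoRA loaders do not recognize. A minimal NumPy sketch of the decomposition (shapes and variable names are illustrative assumptions, not vLLM or PEFT code):

```python
import numpy as np

# DoRA: W' = m * (W0 + B @ A) / ||W0 + B @ A||, with the norm taken
# per column and m a learned per-column magnitude vector. The vector m
# is what ends up serialized as "lora_magnitude_vector".
rng = np.random.default_rng(0)
d_out, d_in, r = 8, 4, 2

W0 = rng.normal(size=(d_out, d_in))      # frozen base weight
B = rng.normal(size=(d_out, r)) * 0.01   # LoRA up-projection
A = rng.normal(size=(r, d_in)) * 0.01    # LoRA down-projection
m = np.linalg.norm(W0, axis=0)           # magnitude init: column norms of W0

V = W0 + B @ A                           # low-rank-updated direction matrix
W_dora = m * (V / np.linalg.norm(V, axis=0))  # rescale each column to m

# Every column of the merged weight now has magnitude m[j].
assert np.allclose(np.linalg.norm(W_dora, axis=0), m)
```

So a DoRA adapter is not just `lora_A`/`lora_B` pairs; a loader that only understands those two tensor kinds has nothing to do with `m` and must reject the checkpoint.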
This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!
Your current environment
🐛 Describe the bug
Deploying Qwen2-7B-Instruct with a LoRA adapter produced by LoRA SFT in LLaMA-Factory. The adapter_config.json is:

{
  "alpha_pattern": {},
  "auto_mapping": null,
  "base_model_name_or_path": "/llmmodel2/qwen/Qwen2-7B-Instruct",
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": "pissa_niter_16",
  "layer_replication": null,
  "layers_pattern": null,
  "layers_to_transform": null,
  "loftq_config": {},
  "lora_alpha": 16,
  "lora_dropout": 0.0,
  "megatron_config": null,
  "megatron_core": "megatron.core",
  "modules_to_save": null,
  "peft_type": "LORA",
  "r": 8,
  "rank_pattern": {},
  "revision": null,
  "target_modules": [
    "up_proj",
    "gate_proj",
    "v_proj",
    "down_proj",
    "q_proj",
    "k_proj",
    "o_proj"
  ],
  "task_type": "CAUSAL_LM",
  "use_dora": true,
  "use_rslora": false
}
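Note the "use_dora": true entry above. A small preflight check over the adapter directory catches this before handing the adapter to vLLM; a sketch, where the helper name is hypothetical and only the standard adapter_config.json layout is assumed:

```python
import json
from pathlib import Path

def assert_vllm_compatible(adapter_dir: str) -> dict:
    """Fail fast if an adapter was trained with options a plain-LoRA
    loader cannot handle. Hypothetical helper, not part of vLLM."""
    cfg = json.loads((Path(adapter_dir) / "adapter_config.json").read_text())
    if cfg.get("use_dora"):
        raise ValueError(
            "adapter was trained with use_dora=true; vLLM only loads plain "
            "LoRA tensors (lora_A/lora_B), not DoRA magnitude vectors"
        )
    return cfg
```

When such a check is skipped, the failure only surfaces later, inside vLLM's LoRA loader, as in the traceback below.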
getting this error:
ERROR:asyncio:Exception in callback functools.partial(<function _log_task_completion at 0x7f15bd79d870>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f15a1782b90>>)
handle: <Handle functools.partial(<function _log_task_completion at 0x7f15bd79d870>, error_callback=<bound method AsyncLLMEngine._error_callback of <vllm.engine.async_llm_engine.AsyncLLMEngine object at 0x7f15a1782b90>>)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/vllm/lora/worker_manager.py", line 175, in _load_lora
    lora = self._lora_model_cls.from_local_checkpoint(
  File "/usr/local/lib/python3.10/dist-packages/vllm/lora/models.py", line 318, in from_local_checkpoint
    modulename, = parse_fine_tuned_lora_name(lora_module)
  File "/usr/local/lib/python3.10/dist-packages/vllm/lora/utils.py", line 107, in parse_fine_tuned_lora_name
    raise ValueError(f"{name} is unsupported LoRA weight")
ValueError: base_model.model.model.layers.0.mlp.down_proj.lora_magnitude_vector is unsupported LoRA weight