Hello @pprp,
The LoRA fine-tuned models cannot be merged back into the pruned weights without destroying sparsity, since the LoRA branch is dense. This makes LoRA fine-tuning much less useful for regaining the performance lost during pruning. Please let me know if there is any sparse LoRA fine-tuning that you are employing.
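To illustrate the issue, here is a minimal sketch with toy tensors (all names and shapes are assumptions for illustration, not code from this repo): merging a dense LoRA update `B @ A` into a pruned weight fills in the positions that pruning zeroed out.

```python
import torch

# Toy shapes purely for illustration; names are assumptions, not repo code.
d, r = 8, 2
mask = (torch.rand(d, d) > 0.5).float()  # pruning mask, ~50% sparsity
W_sparse = torch.randn(d, d) * mask      # pruned base weight

B = torch.randn(d, r) * 0.01             # LoRA up-projection
A = torch.randn(r, d) * 0.01             # LoRA down-projection

W_merged = W_sparse + B @ A              # standard (dense) LoRA merge
print((W_sparse == 0).float().mean())    # ~0.5: sparse before merging
print((W_merged == 0).float().mean())    # ~0.0: dense after merging
```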
Hi, sorry for the late reply.
For LoRA fine-tuning, we employ the same method as Wanda, which does not take sparsity into consideration.
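For reference, a minimal sketch of one possible workaround, assuming the pruning mask is kept after pruning: masking the LoRA update before merging keeps the pruned positions at exactly zero. This is a hypothetical illustration, not what this repo or Wanda implements.

```python
import torch

# Hypothetical masked merge; all names are illustrative assumptions.
d, r = 8, 2
mask = (torch.rand(d, d) > 0.5).float()  # pruning mask saved from pruning
W_sparse = torch.randn(d, d) * mask      # pruned base weight
B = torch.randn(d, r) * 0.01             # LoRA up-projection
A = torch.randn(r, d) * 0.01             # LoRA down-projection

# Mask the LoRA update so pruned positions remain exactly zero.
W_merged = W_sparse + (B @ A) * mask
print((W_merged == 0).float().mean())    # sparsity preserved (~0.5)
```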
For sparse LoRA fine-tuning, you can refer to: