pprp / Pruner-Zero

Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for LLMs
https://arxiv.org/abs/2406.02924
MIT License

Regarding Sparsity of LoRA fine-tuned model. #4

Closed Arnav0400 closed 1 month ago

Arnav0400 commented 3 months ago

Hello @pprp,

The LoRA fine-tuned models cannot be merged back into the pruned weights because the LoRA branch is dense, so merging would destroy the sparsity pattern. This makes LoRA fine-tuning not very useful for recovering the performance lost during pruning. Please let me know whether you employ any form of sparse LoRA fine-tuning.
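To make the concern concrete, here is a minimal PyTorch sketch (not from the repo; the shapes, rank, and 50% unstructured mask are assumptions) showing that adding a dense low-rank update `B @ A` back into a pruned weight matrix removes its sparsity:

```python
import torch

# Hypothetical shapes; real linear layers in a pruned LLM are much larger.
d_out, d_in, r = 256, 256, 8

# Simulate a weight matrix pruned to ~50% unstructured sparsity.
w = torch.randn(d_out, d_in)
mask = (torch.rand(d_out, d_in) > 0.5).float()
w_pruned = w * mask

# A standard (dense) LoRA update: delta_W = B @ A, with no sparsity constraint.
lora_A = torch.randn(r, d_in) * 0.01
lora_B = torch.randn(d_out, r) * 0.01
delta_w = lora_B @ lora_A

def sparsity(t: torch.Tensor) -> float:
    """Fraction of exactly-zero entries."""
    return (t == 0).float().mean().item()

print(f"sparsity before merge: {sparsity(w_pruned):.2f}")            # ~0.50
print(f"sparsity after merge:  {sparsity(w_pruned + delta_w):.2f}")  # ~0.00, dense again
```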

pprp commented 1 month ago

Hi, sorry for the late reply.

For LoRA fine-tuning, we employ the same method as Wanda, which does not take the sparsity into consideration.
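For illustration only (this is not what Pruner-Zero or Wanda does; the tensor names and shapes below are assumptions), two common ways to keep the pruned sparsity pattern when LoRA is used are to leave the adapter unmerged at inference time, or to re-apply the original pruning mask after merging:

```python
import torch

# Hypothetical setup: a pruned weight, its binary mask, and a dense LoRA pair.
d_out, d_in, r = 256, 256, 8
mask = (torch.rand(d_out, d_in) > 0.5).float()
w_pruned = torch.randn(d_out, d_in) * mask
lora_A = torch.randn(r, d_in) * 0.01
lora_B = torch.randn(d_out, r) * 0.01

# Option 1: keep the adapter as a separate branch at inference time.
# The base weight stays sparse, but every forward pass pays for the extra
# low-rank computation.
def forward_with_adapter(x: torch.Tensor) -> torch.Tensor:
    return x @ w_pruned.T + (x @ lora_A.T) @ lora_B.T

# Option 2: merge, then re-apply the original pruning mask so the merged
# weight keeps the same sparsity pattern.
w_merged_sparse = (w_pruned + lora_B @ lora_A) * mask
```

Re-applying the mask preserves the sparsity pattern but discards the LoRA contribution at pruned positions, which is why methods designed for sparse LoRA fine-tuning constrain the update during training rather than only at merge time.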

For sparse LoRA fine-tuning, you can refer to: