QingruZhang / AdaLoRA

AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).
MIT License

How to implement prune LoRA? #6

Closed: A11en0 closed this issue 6 months ago

A11en0 commented 1 year ago

Hi, thanks for your wonderful work on PEFT!

I have read your paper and noticed that Table 4 compares against a variant that prunes LoRA. However, I cannot understand how it is implemented. The paper only mentions that it is done "doublet-wise"; could you please explain it in more detail?

QingruZhang commented 1 year ago

Hi, thanks for your question! Pruning LoRA doublet-wise means that we iteratively mask out all elements of unimportant doublets $\mathcal{G}_{i}$. The importance of $\mathcal{G}_{i}$ is evaluated similarly to Eq. (7), i.e., by averaging the importance scores over all elements of $\mathcal{G}_{i}$. The budget schedule is exactly the same as in AdaLoRA. Therefore, it is akin to applying the RankAllocator on top of LoRA. Hope this answers your questions.
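For anyone else reading this, here is a minimal NumPy sketch of the idea described above, not the repo's actual implementation. It assumes LoRA weights $\Delta W = BA$ with $A \in \mathbb{R}^{r \times d_{in}}$, $B \in \mathbb{R}^{d_{out} \times r}$, so doublet $\mathcal{G}_{i}$ is row $i$ of $A$ together with column $i$ of $B$. It uses the raw sensitivity $|w \cdot \nabla w|$ as the per-element importance and omits the moving-average smoothing and uncertainty terms of Eq. (7); the function names (`doublet_importance`, `prune_doublets`) and the `budget` argument are hypothetical, standing in for the AdaLoRA budget schedule:

```python
import numpy as np

def doublet_importance(A, B, grad_A, grad_B):
    """Per-doublet importance: mean of |w * grad| over all elements of
    doublet G_i = {row i of A, column i of B}.

    Simplified stand-in for Eq. (7): no exponential smoothing, no
    uncertainty term. Returns an array of shape (r,).
    """
    s_A = np.abs(A * grad_A).mean(axis=1)  # (r,) average over row i of A
    s_B = np.abs(B * grad_B).mean(axis=0)  # (r,) average over column i of B
    return 0.5 * (s_A + s_B)

def prune_doublets(A, B, importance, budget):
    """Keep the `budget` most important doublets, zero out the rest.

    In AdaLoRA this masking is applied iteratively as the budget schedule
    shrinks; here we show a single pruning step.
    """
    keep = np.argsort(importance)[-budget:]
    mask = np.zeros_like(importance)
    mask[keep] = 1.0
    # Broadcasting zeroes row i of A and column i of B for pruned doublets.
    return A * mask[:, None], B * mask[None, :]
```

Masking whole doublets (rather than individual elements) keeps $\Delta W = BA$ exactly equivalent to a lower-rank product, which is what makes this comparable to AdaLoRA's rank allocation.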