Closed — A11en0 closed this issue 6 months ago
Hi, thanks for your question! Pruning LoRA doublet-wise means that we iteratively mask out all elements of the unimportant doublets $\mathcal{G}_{i}$. The importance of $\mathcal{G}_{i}$ is evaluated similarly to Eq. (7), i.e., by averaging the importance scores over all elements of $\mathcal{G}_{i}$. The budget schedule is exactly the same as in AdaLoRA, so it is akin to applying the RankAllocator on top of LoRA. Hope this answers your questions.
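For concreteness, here is a rough sketch of what doublet-wise pruning of LoRA could look like. This is not the paper's implementation, just an illustration under two assumptions: (1) element importance is the sensitivity score $s(w) = |w \cdot \nabla_w \mathcal{L}|$ as in AdaLoRA, and (2) the $i$-th doublet of a LoRA layer $\Delta W = BA$ consists of row $i$ of $A$ and column $i$ of $B$. The function names are hypothetical.

```python
import torch

def doublet_importance(A: torch.Tensor, B: torch.Tensor) -> torch.Tensor:
    """Score each doublet of a LoRA layer Delta W = B @ A.

    A: (r, in_features), B: (out_features, r), both with .grad populated.
    Element importance is the sensitivity |w * grad(w)|; a doublet's score
    is the average over all its elements (row i of A plus column i of B),
    in the spirit of AdaLoRA's Eq. (7).
    """
    sA = (A * A.grad).abs()               # (r, in_features)
    sB = (B * B.grad).abs()               # (out_features, r)
    return (sA.mean(dim=1) + sB.mean(dim=0)) / 2   # (r,)

def prune_doublets(A: torch.Tensor, B: torch.Tensor, budget: int) -> None:
    """Keep only the `budget` highest-scoring doublets; zero out the rest.

    The budget would be lowered over training following AdaLoRA's schedule;
    here we just apply one pruning step in place.
    """
    scores = doublet_importance(A, B)
    keep = torch.topk(scores, budget).indices
    mask = torch.zeros_like(scores)
    mask[keep] = 1.0
    with torch.no_grad():
        A.mul_(mask.unsqueeze(1))         # zero pruned rows of A
        B.mul_(mask.unsqueeze(0))         # zero pruned columns of B
```

In a real training loop the sensitivity scores would also be smoothed with an exponential moving average across steps, as AdaLoRA does, rather than taken from a single backward pass.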
Hi, thanks for your wonderful work on PEFT!
I have read your paper and noticed that Table 4 compares against a variant that prunes LoRA. However, I cannot understand how it is implemented. The paper only mentions "doublet-wise"; could you please explain it in more detail?