yxli2123 / LoftQ


SVD Implementation in LoftQ Algorithm #3

Closed MarsJacobs closed 9 months ago

MarsJacobs commented 10 months ago

Thanks for sharing this great work! I learned a lot. I have a question regarding the SVD implementation in the LoftQ initialization.

https://github.com/yxli2123/LoftQ/blob/e6bdef42d0fbc297b18c3e09ec59108aa5c723a8/utils.py#L25-L26

Upon reviewing the code, I noticed that the SVD decomposition uses the "reduced SVD" option (`full_matrices=False`). I am wondering about the potential impact of choosing reduced SVD over full SVD in the context of LoftQ initialization, especially regarding its effect on the alternating optimization process.

Could you share some insights on whether there are specific advantages or reasons for choosing reduced SVD in this scenario?

Thanks in advance.

yxli2123 commented 10 months ago

Thanks for your interest in our work. Setting either full_matrices=True or full_matrices=False will not affect the results. For square matrices, the two are identical. For non-square matrices, reduced SVD is the same as full SVD except that it drops the singular vectors associated with the zero singular values. In our work, we actually use truncated SVD, which keeps only the first r singular vectors.
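For example, here is a minimal sketch (not the exact code in utils.py) showing that the rank-r truncation is identical whichever option you use:

```python
import torch

# Minimal sketch: the rank-r truncation is the same whether it is built
# from the full or the reduced factorization, because only the first r
# singular triplets are kept in either case.
W = torch.randn(128, 64)   # hypothetical non-square weight
r = 16                     # hypothetical LoRA rank

U_f, S_f, Vh_f = torch.linalg.svd(W, full_matrices=True)
U_r, S_r, Vh_r = torch.linalg.svd(W, full_matrices=False)

approx_full    = U_f[:, :r] @ torch.diag(S_f[:r]) @ Vh_f[:r, :]
approx_reduced = U_r[:, :r] @ torch.diag(S_r[:r]) @ Vh_r[:r, :]

# The two rank-r approximations agree up to numerical tolerance.
print(torch.allclose(approx_full, approx_reduced, atol=1e-5))
```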

MarsJacobs commented 10 months ago

Thank you for your kind response. I have some additional questions regarding the LoftQ algorithm.

I am struggling to intuitively understand how repeatedly performing quantization and SVD approximation leads to a progressively better initialization of the adapter weights.

If we rewrite LoftQ Algorithm 1 with an added error term, it looks as follows ($\epsilon$ is an error term):

[Screenshot: LoftQ Algorithm 1 rewritten with an explicit error term]

As in Equation 3, when we approximate $W_{FP} - Q_t$ with the rank-$r$ factors $A_tB_t^\top$ using SVD, let's call the remaining difference between $W_{FP}$ and $Q_t + A_tB_t^\top$ the SVD approximation error $\epsilon^{svd}_t$. I have personally measured how this error term changes across layers with each iteration. The results show that in all layers, this SVD error term decreases as the number of iteration steps increases.
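Concretely, the iteration I have in mind is the following (a rough sketch; my notation may differ slightly from Algorithm 1 in the paper):

$$
\begin{aligned}
Q_t &= q_N\!\left(W_{FP} - A_{t-1}B_{t-1}^\top\right) &&\text{(quantization step)}\\
A_t, B_t &= \mathrm{SVD}_r\!\left(W_{FP} - Q_t\right) &&\text{(rank-$r$ SVD step)}\\
\epsilon^{svd}_t &= W_{FP} - Q_t - A_tB_t^\top &&\text{(SVD approximation error)}
\end{aligned}
$$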

In summary, as the number of LoftQ steps increases, the SVD approximation becomes more accurate, effectively minimizing the main objective stated in Eq. 6 of the paper. However, I am not entirely clear on why this error decreases through the repetition of these two steps ((1) quantization, (2) SVD). Could you please explain this once more?

I conceptually understand how the initializations of the quantized weights and the adapter weights are jointly optimized, but it is not clear to me why this process analytically minimizes $W_{FP} - Q - AB^T$ (maybe I am missing something). I would greatly appreciate additional clarification on this, as it would help me deeply understand the core idea of this excellent paper.

yxli2123 commented 9 months ago

Hi @MarsJacobs, it is not guaranteed that the error decreases as the number of steps increases. The algorithm is a heuristic. For some models, for example some layers in DeBERTa-v3-base, the error fluctuates as the number of steps increases.
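To see the heuristic nature concretely, here is a minimal stand-alone sketch of the alternating loop (with a hypothetical uniform quantizer standing in for the quantization used in the repo) that tracks the residual at every step:

```python
import torch

def fake_quantize(w, num_bits=4):
    # Hypothetical symmetric uniform quantizer, purely for illustration;
    # it stands in for the quantization routines used in the repo.
    levels = 2 ** (num_bits - 1) - 1
    scale = w.abs().max() / levels
    return torch.round(w / scale).clamp(-levels, levels) * scale

def alternating_init(W, rank=16, steps=5, num_bits=4):
    # Alternate between quantizing the residual and fitting a rank-r
    # correction via truncated SVD, printing the residual at each step.
    A = torch.zeros(W.shape[0], rank)
    B = torch.zeros(W.shape[1], rank)
    for t in range(1, steps + 1):
        Q = fake_quantize(W - A @ B.T, num_bits)
        U, S, Vh = torch.linalg.svd(W - Q, full_matrices=False)
        A = U[:, :rank] * S[:rank]      # absorb singular values into A
        B = Vh[:rank, :].T
        err = (W - Q - A @ B.T).norm().item()
        print(f"step {t}: ||W - Q - A B^T||_F = {err:.4f}")
    return Q, A, B

Q, A, B = alternating_init(torch.randn(256, 128))
```

Whether the printed residual decreases monotonically depends on the weight distribution and the quantizer, which is exactly the fluctuation we observe on some layers.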