QingruZhang / AdaLoRA

AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).
MIT License

How do you find the optimal parameters? #9

Open A11en0 opened 11 months ago

A11en0 commented 11 months ago

Hi, thanks for your great work. I notice that AdaLoRA adopts a different pre-trained model from LoRA, so the hyper-parameters need to be searched again. How did you find the optimal parameters for the new model?

macabdul9 commented 10 months ago

Any update on this? @A11en0

A11en0 commented 10 months ago

Nope, but I guess they chose the initial hyper-parameters from the DeBERTa-v3 paper.
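For context, these are the budget-scheduling knobs that would need re-tuning on a new backbone. Below is a minimal sketch using the Hugging Face `peft` port of AdaLoRA (not this repo's `loralib`); all values are illustrative guesses, not the authors' tuned settings:

```python
from transformers import AutoModelForSequenceClassification
from peft import AdaLoraConfig, get_peft_model

# Backbone used in the AdaLoRA paper; swapping it is exactly what
# forces a new hyper-parameter search.
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v3-base", num_labels=2
)

# Illustrative AdaLoRA settings -- not the optimal values for any task.
config = AdaLoraConfig(
    init_r=12,        # initial rank of each incremental matrix
    target_r=8,       # average target rank after budget pruning
    tinit=200,        # warm-up steps before rank pruning starts
    tfinal=1000,      # final steps trained at the fixed budget
    deltaT=10,        # steps between budget re-allocations
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_proj", "value_proj"],  # DeBERTa-v3 attention projections
    total_step=3000,  # total training steps; required by recent peft versions
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```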