QingruZhang / AdaLoRA

AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning (ICLR 2023).

LoRA DreamBooth? #8

Closed Dentoty closed 3 months ago

Dentoty commented 11 months ago

Could this be used to do LoRA DreamBooth training?

QingruZhang commented 11 months ago

AdaLoRA can be regarded as a training plug-in for LoRA fine-tuning. As long as a model has been adapted with LoRA, its fine-tuning can be further extended by AdaLoRA to allocate the parameter budget over the course of training. The question here is how to control the budget scheduler. In the case of very few training examples, as in DreamBooth, we suggest setting the number of final fine-tuning steps (after budget allocation ends) to be similar to the preceding training steps, so that $\Delta W$ is well trained after the budget allocation.
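
For concreteness, here is a minimal sketch of that scheduler setup, assuming the Hugging Face PEFT implementation of AdaLoRA (`AdaLoraConfig`), which exposes the same budget schedule. The step counts, base model, and target modules below are hypothetical placeholders; an actual DreamBooth run would apply the same configuration to a diffusion model's attention layers rather than a language model.

```python
# A minimal sketch of the budget-scheduler setup described above, assuming the
# Hugging Face PEFT implementation of AdaLoRA (AdaLoraConfig). All step counts
# below are hypothetical values for a small DreamBooth-style dataset.
from transformers import AutoModelForCausalLM
from peft import AdaLoraConfig, get_peft_model

total_step = 1000  # total fine-tuning steps (assumed)
tinit = 200        # warmup steps before budget allocation begins
tfinal = 400       # final fine-tuning steps after allocation ends; kept
                   # comparable to the preceding steps so that Delta W is
                   # trained to convergence under the final budget

config = AdaLoraConfig(
    init_r=12,                # initial rank of each incremental matrix
    target_r=4,               # average target rank after budget allocation
    tinit=tinit,
    tfinal=tfinal,
    deltaT=10,                # re-allocate the budget every deltaT steps
    total_step=total_step,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "v_proj"],  # attention projections (placeholder)
)

model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # placeholder
model = get_peft_model(model, config)

# Inside the training loop, step the rank allocator after each optimizer step:
#   for global_step in range(total_step):
#       ...  # forward / backward / optimizer.step()
#       model.base_model.update_and_allocate(global_step)
```

The key point is the ratio of `tfinal` to the allocation phase: once the budget has been pruned down to `target_r`, the remaining `tfinal` steps give the surviving $\Delta W$ components enough updates to converge, which matters most when the dataset is tiny.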