Closed Dentoty closed 3 months ago
AdaLoRA can be regarded as a training plug-in for LoRA fine-tuning. As long as a model has been adapted with LoRA, its fine-tuning can be further extended by AdaLoRA to allocate the parameter budget over the course of training. The question here is how to control the budget scheduler. With very few training examples, as in DreamBooth, we suggest setting the total number of fine-tuning steps similar to the previous training steps, so that $\Delta W$ is well trained after the budget allocation.
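As a rough illustration of what the budget scheduler does, here is a minimal sketch of the cubic budget schedule described in the AdaLoRA paper: the budget stays at its initial value for a warm-up of `tinit` steps, decays cubically toward the target budget, and is held at the target for the final `tfinal` steps so $\Delta W$ can be trained under the fixed allocation. The function name and argument names are illustrative, not the library's API.

```python
def budget_schedule(step, total_step, init_budget, target_budget, tinit, tfinal):
    """Cubic budget schedule (sketch of the AdaLoRA scheduling idea).

    - step < tinit: keep the full initial budget (warm-up).
    - step >= total_step - tfinal: hold the target budget so the remaining
      singular values can be trained under the final allocation.
    - in between: decay cubically from init_budget to target_budget.
    """
    if step < tinit:
        return init_budget
    if step >= total_step - tfinal:
        return target_budget
    frac = 1 - (step - tinit) / (total_step - tfinal - tinit)
    return target_budget + (init_budget - target_budget) * frac ** 3
```

This is why the total step count matters for DreamBooth-style runs: if `total_step` is too small relative to `tfinal`, almost no steps remain after the budget has settled, and $\Delta W$ is under-trained at the final allocation.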
Could this be used to do LoRA DreamBooth training?