Shmtu-Herven opened 1 year ago
@Shmtu-Herven Developing! But you can use the gen weight script to get a diffusers LoRA weight file, then use that file to resume the LoRA training and train it on the reference image for a few dozen steps.
Hi, @KohakuBlueleaf, is there any progress on section 4?
@Subuday No, sorry, I have been quite busy recently and have no time to deal with section 4.
And I also need to improve section 1...
Are you open to doing contract work/accepting a commission for work on section 4?
I have finished the Rank-Relaxed Fast Finetuning, and the generated face images are as expected.
Rank-Relaxed Fast Finetuning can be implemented by merely adjusting the LoRALinearLayer, similar to lilora. It comprises two LoRA structures: a frozen LoRA (r=1) initialized with the weights predicted by the Hypernet, and a trainable LoRA (r>1) with zero initialization. The frozen and trainable LoRAs can be merged into a standard LoRA using SVD decomposition, which is then restored for fast finetuning.
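The two-LoRA layer described above can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the author's actual implementation: the class and method names (`RankRelaxedLoRALinear`, `merge_to_standard_lora`) are hypothetical, and the frozen rank-1 factors stand in for the Hypernet-predicted weights.

```python
import torch
import torch.nn as nn

class RankRelaxedLoRALinear(nn.Module):
    """Hypothetical sketch of a rank-relaxed LoRA linear layer.

    Wraps a frozen base linear layer with two LoRA deltas:
    - a frozen rank-1 LoRA, initialized from Hypernet-predicted weights;
    - a trainable rank-r LoRA (r > 1), zero-initialized so training
      starts exactly from the Hypernet prediction.
    """

    def __init__(self, base: nn.Linear, frozen_down: torch.Tensor,
                 frozen_up: torch.Tensor, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)

        in_f, out_f = base.in_features, base.out_features
        # Frozen rank-1 LoRA: weights come from the Hypernet prediction.
        self.frozen_down = nn.Parameter(frozen_down, requires_grad=False)  # (1, in_f)
        self.frozen_up = nn.Parameter(frozen_up, requires_grad=False)      # (out_f, 1)
        # Trainable rank-r LoRA: "up" is zero-initialized, so the
        # trainable delta contributes nothing at step 0.
        self.down = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.up = nn.Parameter(torch.zeros(out_f, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        out = out + x @ self.frozen_down.T @ self.frozen_up.T  # frozen r=1 delta
        out = out + x @ self.down.T @ self.up.T                # trainable r>1 delta
        return out

    @torch.no_grad()
    def merge_to_standard_lora(self, rank: int):
        """Merge frozen + trainable deltas into one standard LoRA via SVD."""
        delta = self.frozen_up @ self.frozen_down + self.up @ self.down  # (out_f, in_f)
        U, S, Vh = torch.linalg.svd(delta, full_matrices=False)
        up = U[:, :rank] * S[:rank]  # absorb singular values into the up factor
        down = Vh[:rank, :]
        return up, down
```

The combined delta has rank at most 1 + r, so an SVD truncated at that rank recovers it exactly, and the result can be saved as an ordinary LoRA weight file for the fast-finetuning stage.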
Here are some results from my experiments:
Thank you for your work. Could you describe the detailed steps for step 4?