KohakuBlueleaf / HyperKohaku

A diffusers-based implementation of HyperDreamBooth
Apache License 2.0

Section 4: further finetuning #3

Open Shmtu-Herven opened 1 year ago

Shmtu-Herven commented 1 year ago

Thank you for your work. What are the detailed steps for section 4 (further finetuning)?

KohakuBlueleaf commented 1 year ago

@Shmtu-Herven Still in development! But you can use the gen-weight script to get a diffusers LoRA weight file, then use that file to resume LoRA training and train on the reference image for a few dozen steps.
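A rough, untested sketch of that resume-and-finetune flow, using only the public diffusers API. The model id, weight-file path, instance prompt, and hyperparameters below are placeholders, not the actual interface of this repo's gen-weight script, and the `"lora" in name` filter assumes diffusers' PEFT-style LoRA parameter naming:

```python
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from diffusers import DDPMScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_lora_weights("predicted_lora.safetensors")  # placeholder path

unet, vae = pipe.unet, pipe.vae
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

# Train only the injected LoRA parameters; everything else stays frozen.
unet.requires_grad_(False)
lora_params = [p for n, p in unet.named_parameters() if "lora" in n]
for p in lora_params:
    p.requires_grad_(True)
optimizer = torch.optim.AdamW(lora_params, lr=1e-4)

# Encode the reference image and the instance prompt once.
img = Image.open("reference.png").convert("RGB").resize((512, 512))
pixels = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0
pixels = pixels.permute(2, 0, 1).unsqueeze(0)
with torch.no_grad():
    latents = vae.encode(pixels).latent_dist.sample() * vae.config.scaling_factor
    ids = pipe.tokenizer(
        "a photo of sks person",  # placeholder instance prompt
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids
    prompt_embeds = pipe.text_encoder(ids)[0]

# "A few dozen steps" of the standard DreamBooth-style denoising loss.
for _ in range(50):
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,))
    noisy = noise_scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=prompt_embeds).sample
    loss = F.mse_loss(pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```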

Subuday commented 1 year ago

Hi @KohakuBlueleaf, is there any progress on section 4?

KohakuBlueleaf commented 1 year ago

@Subuday No, sorry. I have been quite busy recently and have no time to work on section 4.

And I also need to improve section 1...

killah-t-cell commented 1 year ago

Are you open to doing contract work/accepting a commission for work on section 4?

chenxinhua commented 10 months ago

I have finished the Rank-Relaxed Fast Finetuning, and the generated face images are as expected.

The Rank-Relaxed Fast Finetuning can be implemented by merely adjusting the LoRALinearLayer, similar to lilora. It comprises two LoRA structures: a frozen LoRA (r=1) initialized with the weights predicted by the hypernetwork, and a trainable LoRA (r>1) with zero initialization. The frozen and trainable LoRAs can be merged into a single standard LoRA using SVD decomposition, which is then restored for fast finetuning.
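For reference, a minimal PyTorch sketch of such a rank-relaxed layer and the SVD merge, assuming `nn.Linear` weights of shape `(out_features, in_features)`. The class and function names here are illustrative, not the actual lilora code from this repo:

```python
import torch
import torch.nn as nn


class RankRelaxedLoRALinear(nn.Module):
    """Frozen rank-1 LoRA (hypernet prediction) + trainable rank-r LoRA."""

    def __init__(self, base: nn.Linear, pred_down, pred_up, relaxed_rank=4, scale=1.0):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)

        # Frozen rank-1 LoRA, initialized with the hypernet-predicted weights.
        self.frozen_down = nn.Parameter(pred_down, requires_grad=False)  # (1, in_features)
        self.frozen_up = nn.Parameter(pred_up, requires_grad=False)      # (out_features, 1)

        # Trainable rank-r LoRA; the up projection is zero-initialized so the
        # layer output is unchanged at step 0.
        self.down = nn.Parameter(torch.randn(relaxed_rank, base.in_features) * 0.01)
        self.up = nn.Parameter(torch.zeros(base.out_features, relaxed_rank))
        self.scale = scale

    def forward(self, x):
        out = self.base(x)
        out = out + self.scale * (x @ self.frozen_down.T @ self.frozen_up.T)
        out = out + self.scale * (x @ self.down.T @ self.up.T)
        return out

    def merged_delta(self):
        """Combined weight delta of the frozen and trainable branches."""
        return self.frozen_up @ self.frozen_down + self.up @ self.down


def merge_to_standard_lora(layer: RankRelaxedLoRALinear, rank: int):
    """SVD-project the combined delta back into one standard LoRA pair."""
    U, S, Vh = torch.linalg.svd(layer.merged_delta(), full_matrices=False)
    up = U[:, :rank] * S[:rank]  # (out_features, rank)
    down = Vh[:rank]             # (rank, in_features)
    return up, down
```

Zero-initializing the trainable up projection means finetuning starts exactly from the hypernet prediction, while the extra rank gives the "relaxation" room to correct it.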

Here are some results from my experiments:

[image: generated face samples]