xingp-ng opened this issue 7 months ago
Regarding your questions: since training is an optimization process, training different LoRAs on the same image can yield different results due to different initializations, and the optimization sometimes struggles to learn the given concept perfectly. I recommend training with different seeds or adjusting other training parameters. Also, since this is a personalization technique, the resulting image may not align perfectly with the content image; I suggest combining our approach with content-preservation techniques such as ControlNet, although I haven't personally tested this.
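For reference, here is a minimal, untested sketch of what that ControlNet suggestion could look like with diffusers: a Canny ControlNet for SDXL constrains the output layout to the content image while the trained B-LoRA supplies the learned concept, and a small seed sweep at inference time gives several candidates to pick from. The model IDs, the LoRA path, the trigger tokens, and the conditioning scale are placeholders, and B-LoRA's own inference code may load weights differently than a plain `load_lora_weights` call, so adapt accordingly.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline

# Canny ControlNet for SDXL, conditioned on edges from the content image.
controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Load the trained B-LoRA weights (path is a placeholder).
pipe.load_lora_weights("path/to/trained_b_lora")

# Edge map extracted from the content image constrains the output layout.
content = np.array(Image.open("content.png").convert("RGB"))
gray = cv2.cvtColor(content, cv2.COLOR_RGB2GRAY)
edges = cv2.Canny(gray, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Sweep a few inference seeds and save all candidates for inspection,
# since different seeds can land on noticeably different results.
for seed in (0, 1, 2, 3):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(
        prompt="a [v] in the style of [s]",  # placeholder trigger tokens
        image=control_image,
        controlnet_conditioning_scale=0.5,  # lower = looser adherence to edges
        num_inference_steps=30,
        generator=generator,
    ).images[0]
    image.save(f"out_seed{seed}.png")
```

If the outputs follow the edges too rigidly (or not enough), sweeping `controlnet_conditioning_scale` is usually the first knob to try.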
Ha-ha, interesting workaround. Five B-LoRAs trained with different seeds add up to roughly the size of one full LoRA model xD
We believe we have reproduced the expected results, but we still have a few questions.
In practice, only about one generated image in eight turns out to be usable, which makes the method hard to apply.
The results also may not align with the content image.
Is there any technique to alleviate these problems?