Yes, fine-tuned SD-1.5 or SD-XL may gain some improvement from our training data. However, for the following two reasons:
Thus, we didn't fine-tune SD-1.5 or SD-XL.
Thanks for your reply!
You could simply ignore (truncate) the tokens beyond the 77-token CLIP limit when fine-tuning SD-1.5 or SD-XL.
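For illustration, a minimal sketch (not from the ELLA codebase) of what that truncation could look like with `transformers.CLIPTokenizer`; the model id and caption are placeholders:

```python
from transformers import CLIPTokenizer

# SD-1.5's CLIP tokenizer; model id is illustrative.
tokenizer = CLIPTokenizer.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="tokenizer"
)

long_caption = "a very long, densely detailed caption that exceeds the CLIP context window ..."

# truncation=True keeps only the first 77 tokens (incl. BOS/EOS) and drops the rest.
tokens = tokenizer(
    long_caption,
    padding="max_length",
    max_length=tokenizer.model_max_length,  # 77 for CLIP
    truncation=True,
    return_tensors="pt",
)
print(tokens.input_ids.shape)  # torch.Size([1, 77])
```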
I'm still curious whether the improvement comes from the LLM or from the training data.
Actually, we're quite curious about this as well. We'll try to gather enough GPUs to fine-tune SD-1.5 ~~or SD-XL~~.
Thanks! Looking forward to your results!
We fine-tuned the whole U-Net of SD v1.5 on the proposed dataset, adhering to the same training hyperparameters used for ELLA-SD1.5 (which incorporates T5-XL and TSC). Both models were trained for 140,000 optimization steps, corresponding to approximately one epoch.
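For reference, a minimal sketch of what such a full U-Net fine-tune could look like with `diffusers`. This is not the ELLA training code: `train_dataloader` is a hypothetical dataloader yielding `pixel_values` and 77-token `input_ids`, and the learning rate is illustrative rather than the exact hyperparameter used above.

```python
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel

model_id = "runwayml/stable-diffusion-v1-5"
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").eval()
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").eval()
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the U-Net is trained; VAE and CLIP text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
unet.train()

optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)  # illustrative lr

for step, batch in enumerate(train_dataloader):  # hypothetical dataloader
    # Encode images into the VAE latent space.
    latents = vae.encode(batch["pixel_values"]).latent_dist.sample()
    latents = latents * vae.config.scaling_factor

    # Sample noise and random timesteps, then add noise to the latents.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps,
        (latents.shape[0],), device=latents.device
    ).long()
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Text conditioning from the frozen CLIP text encoder (prompts truncated to 77 tokens).
    encoder_hidden_states = text_encoder(batch["input_ids"])[0]

    # Standard epsilon-prediction objective.
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    loss = F.mse_loss(noise_pred.float(), noise.float())

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    if step >= 140_000:  # roughly one epoch in the setup described above
        break
```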
Hello, thanks for your great work.
I'm curious about the effect of the training data. Did you ever directly fine-tune the full SD-1.5 or SD-XL model on the training data?
I guess fine-tuning SD-1.5 can also benefit from the training data, e.g. the T2I-CompBench performance.
Can you report the performance of fine-tuned SD-1.5 or SD-XL on your training data?
Thanks!