Open squewel opened 3 days ago
Cc: @linoytsaban
I've added a fix: since we don't tune the T5 text encoder in Flux, that line is unnecessary (possibly a typo). Let me know if that's not the case, thanks!
Thanks @biswaroop1547! Indeed, for now we support full fine-tuning of the CLIP encoder only when `--train_text_encoder` is enabled.
Describe the bug
`train_dreambooth_lora_flux.py`, when run with `--train_text_encoder --optimizer="prodigy"`, fails with `IndexError: list index out of range`:
```
09/18/2024 20:06:33 - WARNING - __main__ - Learning rates were provided both for the transformer and the text encoder- e.g. text_encoder_lr: 5e-06 and learning_rate: 1.0. When using prodigy only learning_rate is used as the initial learning rate.
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth_lora_flux.py", line 1891, in <module>
    main(args)
  File "/content/diffusers/examples/dreambooth/train_dreambooth_lora_flux.py", line 1375, in main
    params_to_optimize[2]["lr"] = args.learning_rate
IndexError: list index out of range
```
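A minimal sketch of the likely cause (variable names and values are assumptions based on the traceback, not the script's actual code): with `--train_text_encoder`, `params_to_optimize` holds only two parameter groups, transformer and CLIP encoder, so indexing a third (T5) group at index 2 raises `IndexError`. Guarding on the list length avoids the crash:

```python
# Hypothetical reconstruction: with --train_text_encoder, only the
# transformer and the CLIP text encoder get parameter groups -- there is
# no third (T5) group, since T5 is not fine-tuned in Flux.
learning_rate = 1.0      # Prodigy's recommended initial lr
text_encoder_lr = 5e-6   # ignored by Prodigy per the warning above

params_to_optimize = [
    {"params": ["transformer_lora_params"], "lr": learning_rate},
    {"params": ["clip_lora_params"], "lr": text_encoder_lr},
]

# Buggy line from the traceback -- assumes a third (T5) group exists:
#   params_to_optimize[2]["lr"] = args.learning_rate  -> IndexError

# Guarded fix: with Prodigy, overwrite the text-encoder lr with the
# single initial learning rate, but only for groups that exist.
params_to_optimize[1]["lr"] = learning_rate
if len(params_to_optimize) > 2:
    params_to_optimize[2]["lr"] = learning_rate
```

Removing the `params_to_optimize[2]` line entirely (as the linked fix does) is equivalent here, since the T5 group never exists in this script.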
Reproduction
Run the sample script with the parameters from the docs, under "To perform DreamBooth LoRA with text-encoder training, run:":
https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md
Logs
System Info
colab
Who can help?
@sayakpaul