Closed Dawn-LX closed 1 year ago
I'm following your code to train LoRA for DreamBooth, but there is a bug in the training loop.
https://github.com/cloneofsimo/lora/blob/bdd51b04c49fa90a88919a19850ec3b4cf3c5ecd/training_scripts/train_lora_dreambooth.py#L888
`optimizer.zero_grad()` should be called before `accelerator.backward(loss)`; otherwise gradients left over from the previous iteration accumulate into the new ones. The fix:

```python
optimizer.zero_grad()
accelerator.backward(loss)
```
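To illustrate why the ordering matters, here is a minimal sketch in plain PyTorch (using `loss.backward()` in place of `accelerator.backward(loss)`, since `Accelerator` simply wraps the same backward pass). The first part shows that gradients accumulate across `backward()` calls when they are not zeroed; the second shows the corrected loop order:

```python
import torch

# Without zero_grad(), gradients accumulate across backward() calls.
w = torch.tensor([1.0], requires_grad=True)
(2 * w).sum().backward()
(2 * w).sum().backward()
print(w.grad.item())  # 4.0 — the second backward added onto the stale gradient

# Corrected loop order: zero_grad() before backward(), then step().
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(3):
    loss = model(torch.randn(2, 4)).pow(2).mean()
    optimizer.zero_grad()  # clear leftovers from the previous iteration
    loss.backward()        # accelerator.backward(loss) in the training script
    optimizer.step()
```

The equivalent fix is to call `optimizer.zero_grad()` after `optimizer.step()` at the end of each iteration; either way, each `backward()` must start from zeroed gradients.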