cloneofsimo / lora

Using Low-rank adaptation to quickly fine-tune diffusion models.
https://arxiv.org/abs/2106.09685
Apache License 2.0

`optimizer.zero_grad()` is called after the loss backward pass #263

Closed · Dawn-LX closed this 1 year ago

Dawn-LX commented 1 year ago

I'm following your code to train LoRA for DreamBooth, but there is a bug in the training loop.

https://github.com/cloneofsimo/lora/blob/bdd51b04c49fa90a88919a19850ec3b4cf3c5ecd/training_scripts/train_lora_dreambooth.py#L888

The `optimizer.zero_grad()` call should come before `accelerator.backward(loss)`.
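
For reference, here is a minimal sketch of the ordering being proposed, written against the Accelerate API that the training script already uses. The model, data loader, and loss below are illustrative placeholders, not identifiers from `train_lora_dreambooth.py`:

```python
# Minimal sketch (not the script's actual code): dummy model, data, and loss
# stand in for the LoRA-injected UNet and the DreamBooth batch.
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = torch.nn.Linear(8, 8)                                # placeholder for the trainable LoRA parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

for batch in [torch.randn(4, 8) for _ in range(3)]:          # placeholder data loader
    batch = batch.to(accelerator.device)
    optimizer.zero_grad()                                    # clear stale gradients before the backward pass
    loss = model(batch).pow(2).mean()                        # placeholder loss
    accelerator.backward(loss)                               # accumulate fresh gradients
    optimizer.step()                                         # apply the update
```

The placement that actually breaks training is a `zero_grad()` issued between `backward` and `step`, since that discards the freshly computed gradients before the optimizer can apply them.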