AnswerDotAI / fsdp_qlora

Training LLMs with QLoRA + FSDP
Apache License 2.0

DoRA training not taking dropout or alpha into account #68

Open BenjaminBossan opened 1 month ago

BenjaminBossan commented 1 month ago

I think there is a bug in the DoRA implementation, as it takes neither lora_dropout nor lora_alpha into account. These arguments are passed as *args to the __init__ call of the DoRA layers but are subsequently ignored inside dora.py. This is easy to miss because the DoRA paper does not include them in its equations, but they are mentioned elsewhere in the paper and should be applied in the same way as in the LoRA implementation.
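For reference, here is a minimal sketch (illustrative names, not the actual code in dora.py) of where lora_alpha and lora_dropout usually enter a LoRA-style layer, following the common `scaling = lora_alpha / r` convention:

```python
import torch.nn as nn

class LoRAStyleAdapter(nn.Module):
    """Illustrative sketch only. Shows where lora_alpha and lora_dropout
    typically enter a LoRA-style update."""

    def __init__(self, base_layer: nn.Linear, r: int, lora_alpha: float, lora_dropout: float):
        super().__init__()
        self.base_layer = base_layer
        self.lora_A = nn.Linear(base_layer.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base_layer.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)       # standard LoRA init: the update starts at zero
        self.scaling = lora_alpha / r            # lora_alpha rescales the low-rank update
        self.dropout = nn.Dropout(lora_dropout)  # dropout on the adapter branch only

    def forward(self, x):
        # The base path sees the raw x; only the adapter path sees the dropped-out x.
        return self.base_layer(x) + self.lora_B(self.lora_A(self.dropout(x))) * self.scaling
```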

Also note that lora_dropout is applied only on the LoRA/DoRA branch, not to the base model output. I believe this has an impact on these lines, as they currently assume that the same x is used for the base layer and the DoRA part.
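Here is a rough sketch of how the forward pass could keep the two activations separate once dropout is applied only on the adapter branch. It loosely follows the approach used in PEFT's DoRA implementation; the names and exact structure are illustrative, not this repo's code:

```python
import torch.nn as nn
import torch.nn.functional as F

def dora_forward(x, base_layer, lora_A, lora_B, magnitude, scaling, dropout):
    """Sketch: the base path uses the raw x, while the DoRA correction is
    computed from the dropped-out x, so the two paths no longer share one
    activation."""
    base_out = base_layer(x)   # base path: no dropout
    x_d = dropout(x)           # adapter path: dropout applied here only

    # Column-wise norm of the merged weight W0 + scaling * B @ A.
    merged = base_layer.weight + scaling * (lora_B.weight @ lora_A.weight)
    weight_norm = merged.norm(p=2, dim=1)       # shape: (out_features,)
    mag_norm_scale = magnitude / weight_norm    # DoRA magnitude rescaling

    lora_out = lora_B(lora_A(x_d)) * scaling
    # Correction term: the base contribution recomputed from x_d is rescaled with
    # (mag_norm_scale - 1) so that base_out is not counted twice in the sum.
    dora_out = (mag_norm_scale - 1) * F.linear(x_d, base_layer.weight) + mag_norm_scale * lora_out
    return base_out + dora_out
```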