Closed: Atoli closed this issue 7 months ago
Hey, I can reproduce it. Is there a solution for this? Maybe the learning rate needs to be adjusted differently? (RTX 3080 Ti)
Please report this to kohya on his sd-scripts repo. I only wrap his code with a GUI; he is the one who needs to fix this issue.
Thank you for the response and for your work
Any LoRA I train with a network alpha higher than 1 ends in NaN loss and subsequently does not work when tested:
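For context on why network alpha interacts with loss stability: in the usual LoRA formulation (as used by sd-scripts), the low-rank update is scaled by alpha / rank, so both the update and its gradients grow linearly with alpha. The sketch below is a hypothetical NumPy illustration of that scaling, not the actual sd-scripts implementation; all names (lora_forward, the shapes, the 0.01 init) are made up for the example.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha, rank):
    """Hypothetical sketch of a LoRA forward pass.

    The low-rank branch is scaled by alpha / rank, so a larger network
    alpha multiplies both the update and its gradients linearly.
    """
    scale = alpha / rank
    return x @ W + scale * (x @ A @ B)

rng = np.random.default_rng(0)
rank = 8
x = rng.normal(size=(1, 64))
W = rng.normal(size=(64, 64))        # frozen base weight
A = rng.normal(size=(64, rank)) * 0.01
B = rng.normal(size=(rank, 64)) * 0.01

base = lora_forward(x, W, A, B, alpha=1, rank=rank)
big = lora_forward(x, W, A, B, alpha=16, rank=rank)

# The LoRA contribution (difference from the frozen base output) grows
# linearly with alpha; with fp16 training, a high alpha can therefore
# push activations/gradients toward overflow and NaN unless the
# learning rate is reduced to compensate.
delta_small = np.abs(base - x @ W).max()
delta_big = np.abs(big - x @ W).max()
print(delta_big / delta_small)  # ~16
```

This is also why a common rule of thumb is to lower the learning rate (or use a smaller alpha, e.g. alpha = rank / 2) when raising network alpha.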
Here are my full settings; it's a standard LoRA training run, nothing out of the ordinary:
In text version: