Linaqruf / kohya-trainer

Adapted from https://note.com/kohya_ss/n/nbf7ce8d80f29 for easier cloning
Apache License 2.0

possible misleading tip of learning rate? #188

Open zhuliyi0 opened 1 year ago

zhuliyi0 commented 1 year ago

In the LoRA Dreambooth script, LoRA and Optimizer Config section, it says "if you want to train with higher dim/alpha so badly, try using higher learning rate. Because the model learning faster in higher dim", but from experiments I observed that a lower learning rate is needed for higher dim, which makes sense since higher dim has more trainable parameters and is more prone to overfitting. Is there more to it that I am not aware of?
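For context, the usual LoRA formulation scales the low-rank update by alpha/dim (network_alpha/network_dim in the kohya scripts), so dim, alpha, and learning rate interact. A minimal NumPy sketch of that scaling, with illustrative names and dimensions not taken from the kohya code:

```python
import numpy as np

def lora_delta(dim, alpha, d_in=16, d_out=16, seed=0):
    # Hypothetical minimal sketch of a LoRA weight update:
    # delta_W = (alpha / dim) * B @ A, following the standard LoRA scaling.
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(dim, d_in))   # down-projection, trained
    B = rng.normal(size=(d_out, dim))  # up-projection, trained
    return (alpha / dim) * (B @ A)

# With alpha held fixed, raising dim shrinks the per-step scaling factor,
# but it also multiplies the number of trainable parameters:
print(8 / 8)    # dim=8,  alpha=8 -> scaling 1.0
print(8 / 32)   # dim=32, alpha=8 -> scaling 0.25
print(lora_delta(32, 8).shape)  # the delta still matches the base weight
```

So whether a higher dim needs a higher or lower learning rate depends on whether alpha is raised along with it; with fixed alpha the effective update shrinks, while the extra capacity still makes overfitting easier, which may be the source of the conflicting advice.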

Linaqruf commented 1 year ago

Lol yeah, I mixed up on that part, I'll fix it later