In the LoRA Dreambooth script, the LoRA and Optimizer Config section says: "if you want to train with higher dim/alpha so badly, try using higher learning rate. Because the model learning faster in higher dim". But in my experiments I observed that a lower learning rate is needed for higher dim, which makes sense since higher dim has more trainable parameters and is more prone to overfitting. Is there something more that I am not aware of?
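For context, my mental model is the standard LoRA formulation sketched below (a minimal illustration, not the script's actual code; the class and argument names are mine). The point I keep coming back to is the `alpha / dim` scale factor: with alpha held fixed, raising dim actually shrinks the scale applied to the update, so it is not obvious to me that higher dim means "learning faster".

```python
import torch

# Minimal sketch of a standard LoRA linear layer (Hu et al. style).
# Names and shapes are illustrative, not taken from the script itself.
class LoRALinear(torch.nn.Module):
    def __init__(self, in_features, out_features, dim=4, alpha=4.0):
        super().__init__()
        self.base = torch.nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)        # frozen pretrained weight
        self.down = torch.nn.Linear(in_features, dim, bias=False)   # A matrix
        self.up = torch.nn.Linear(dim, out_features, bias=False)    # B matrix
        torch.nn.init.zeros_(self.up.weight)          # delta starts at zero
        self.scale = alpha / dim                      # the scaling in question

    def forward(self, x):
        # With alpha fixed, a higher dim shrinks self.scale; with alpha kept
        # equal to dim, self.scale stays at 1 but more parameters are trained.
        return self.base(x) + self.up(self.down(x)) * self.scale
```

Given this, I would expect the learning-rate advice to depend on whether alpha is raised along with dim, rather than higher dim simply calling for a higher learning rate.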