Open knishika62 opened 1 week ago
- Even after changing Max Train Epochs from 16 to 10, the CLI option remains 16
- When switching to the 16GB VRAM preset, --optimizer_type adamw8bit remains and --optimizer_type adafactor is appended after it
- Save every N epochs is fixed at 4 and cannot be changed
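A note on the duplicated optimizer flag: if the underlying script parses its options with Python's argparse (as kohya's sd-scripts does), a repeated flag is not an error; the last occurrence silently wins, so the appended adafactor would override adamw8bit. A minimal sketch (the parser here is illustrative, not fluxgym's actual code):

```python
import argparse

# Minimal parser with a flag shaped like the one in the report.
parser = argparse.ArgumentParser()
parser.add_argument("--optimizer_type", default="AdamW")

# Simulate the duplicated command line from the bug report:
# the earlier value is overwritten by the later one.
args = parser.parse_args(["--optimizer_type", "adamw8bit",
                          "--optimizer_type", "adafactor"])
print(args.optimizer_type)  # adafactor (last occurrence wins)
```

So the duplication is likely harmless at runtime, but it makes the generated command misleading to read.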
Can you try pulling the latest version? I just pushed an update that fixes all the customizability issues https://github.com/cocktailpeanut/fluxgym/commit/a118913e3b18c91d143ec334662cca3ccd1859a4
- The default Learning Rate of 1e-4 is too low. I tried 1e-3, 8e-4, and 4e-4, and 8e-4 seems to give the best results
Currently I'm deliberately using the configs recommended by kohya. In this case 1e-4 is recommended. Maybe you could ask kohya if 1e-4 is actually the best option, and if they change the recommended settings, i'll do the same https://github.com/kohya-ss/sd-scripts/tree/sd3?tab=readme-ov-file#flux1-lora-training
- Resize dataset images default to 1024
Could you explain what you mean here? Are you saying the images are being resized to 1024? I just checked and they are correctly being resized to 512 or 1024 depending on the radio button you select in the UI.
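For reference, a common way to map dataset images onto a 512 or 1024 target is to scale the longer side down to the target while preserving aspect ratio. A hedged sketch of that arithmetic (illustrative only; fluxgym/sd-scripts may bucket differently, e.g. by total area):

```python
def fit_to_target(width: int, height: int, target: int) -> tuple[int, int]:
    """Scale so the longer side equals `target`, preserving aspect ratio.

    Illustrative only; this is an assumption, not fluxgym's actual
    resizing/bucketing logic.
    """
    scale = target / max(width, height)
    return round(width * scale), round(height * scale)

print(fit_to_target(2000, 1500, 512))   # (512, 384)
print(fit_to_target(1024, 1024, 1024))  # (1024, 1024)
```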
Can you try pulling the latest version? I just pushed an update that fixes all the customizability issues
Confirmed, thank you!
Resize dataset images default to 1024
The default is now 512, but wouldn't 1024 be better?
kohya. In this case 1e-4 is recommended.
I know kohya from GitHub. He also follows me on X and we occasionally talk, and he says that learning progresses a bit slowly at 1e-4. So I'm experimenting with 1e-3, 8e-4, 6e-4, 4e-4, etc. In the attached photo, 8e-4 looks best. https://x.com/kohya_tech/status/1832237684758343898
The default is now 512, but wouldn't 1024 be better?
Using 1024 seems to use up much more VRAM and takes much longer, which is why it's not the default, since the whole point of this project is to make it easy to train on low VRAM machines.
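The cost gap is easy to quantify: a 1024×1024 image has 4× the pixels of 512×512, and since attention cost grows roughly quadratically with token count, the compute gap is even larger. A back-of-the-envelope sketch (the 8× VAE downsampling and 2×2 latent patching factors are assumptions about a Flux-like architecture, not measured values):

```python
def flux_tokens(side: int, vae_downsample: int = 8, patch: int = 2) -> int:
    """Approximate transformer token count for a square image.

    Assumes an 8x VAE downsample and 2x2 latent patching (Flux-like);
    these factors are assumptions for illustration.
    """
    latent = side // vae_downsample       # e.g. 1024 -> 128
    return (latent // patch) ** 2         # e.g. (128 // 2) ** 2 = 4096

t512, t1024 = flux_tokens(512), flux_tokens(1024)
print(t512, t1024)            # 1024 4096 -> 4x more tokens
print((t1024 / t512) ** 2)    # 16.0 -> rough quadratic attention-cost ratio
```

Under these assumptions, 1024 training sees 4× the tokens and roughly an order of magnitude more attention compute per step, which matches the observed VRAM and speed difference.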
I know kohya from GitHub. He also follows me on X and we occasionally talk, and he says that learning progresses a bit slowly at 1e-4. So I'm experimenting with 1e-3, 8e-4, 6e-4, 4e-4, etc. In the attached photo, 8e-4 looks best.
Let me try 8e-4 myself, will ask on X as well.
Using 1024 seems to use up much more VRAM and takes much longer, which is why it's not the default, since the whole point of this project is to make it easy to train on low VRAM machines.
Right now, 1024 gives good quality, but it takes more time to train. I was about to add that I agree with 512 on this point... understood.
Thank you very much!
Thank you for fluxgym. The Japanese community is also happy that it's easy to install. By the way, there are a few things I'm curious about.
I would appreciate it if you could re-check the CLI options generated for parameter changes. (Sorry, I'm using Google Translate.)