Nerogar / OneTrainer

OneTrainer is a one-stop solution for all your stable diffusion training needs.
GNU Affero General Public License v3.0

[Feat]: Select the non-EMA checkpoint for Stable Cascade #190

Open sjuxax opened 7 months ago

sjuxax commented 7 months ago

Describe your use-case.

Trying to train a Stable Cascade LoRA and finding that the non-EMA samples are much better. I'm now faced with trying to create a safetensors file that uses the non-EMA weights. At https://github.com/Nerogar/OneTrainer/issues/116#issuecomment-1879358111, @Nerogar mentioned this was possible with some manual manipulation and the Model Convert tool. Instructions on how to do that would be nice, but the Model Convert dropdown doesn't have Stable Cascade as an option, so adding it may be a prerequisite.

What would you like to see as a solution?

Ideally, write a simple-to-copy non-EMA safetensors file alongside the EMA one.

This probably also requires adding Stable Cascade support to Model Convert.
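
For reference, if the EMA and non-EMA tensors end up in the same safetensors file, the "manual manipulation" could look roughly like the sketch below. This is only a sketch: the file paths are placeholders and the assumption that EMA tensors are identifiable by an "ema" substring in their key is not verified against OneTrainer's actual key layout.

```python
# Sketch: copy a checkpoint while dropping tensors whose key marks them as EMA.
# Assumptions (not confirmed for OneTrainer / Stable Cascade):
#   * EMA tensors carry an "ema" substring in their key name.
#   * The checkpoint fits in RAM.
from safetensors.torch import load_file, save_file

src = "stable_cascade_checkpoint.safetensors"   # hypothetical input path
dst = "stable_cascade_no_ema.safetensors"       # hypothetical output path

state = load_file(src)
non_ema = {k: v for k, v in state.items() if "ema" not in k.lower()}
save_file(non_ema, dst)
print(f"kept {len(non_ema)} of {len(state)} tensors")
```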

Have you considered alternatives? List them here.

N/A

sjuxax commented 7 months ago

It looks like we can just use the lora.safetensors file to drop the EMA; that's working for me in ComfyUI. It also looks like I can resume training without EMA by renaming the EMA folder and setting EMA to OFF in the backup's config JSON. It'd be nice if this were documented somewhere. Thanks.
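
In case it helps anyone else, that config edit can be scripted. A minimal sketch, assuming the backup's config file is named config.json and the setting lives under an "ema" key with an "OFF" value (check your own backup for the real names before running anything like this):

```python
# Sketch of the config edit described above: disable EMA in the backup's
# config json before resuming. The "ema" key and "OFF" value are assumptions
# based on this comment, not a documented OneTrainer API.
import json

config_path = "backup/config.json"  # hypothetical path inside the backup folder

with open(config_path) as f:
    config = json.load(f)

config["ema"] = "OFF"  # assumed key/value; verify against the existing json

with open(config_path, "w") as f:
    json.dump(config, f, indent=4)
```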

311-code commented 7 months ago

Edit: I just ended up reading the kohya-ss scripts Cascade branch and got the answer to my question: "The first time, specify --text_model_checkpoint_path and --save_text_model to save the Text Encoder weights. From the next time, specify --text_model_checkpoint_path to load the saved weights."

So I assume OneTrainer also exports two models. The official default learning rate is 1e-4 (0.0001) and the official settings use bf16 for training. I think I was accidentally using fp16, which is reportedly unstable.