lllyasviel / stable-diffusion-webui-forge


Patching LoRA weights with the Flux dev fp16 model #2208

Status: Open. Opened by iqddd 1 month ago

iqddd commented 1 month ago

Why do I have to wait for LoRA weight patching on the Flux dev fp16 model even when Automatic mode (fp16 LoRA) is set?

derpina-ai commented 4 weeks ago

It's probably an fp8 LoRA?

iqddd commented 4 weeks ago

> It's probably an fp8 LoRA?

I'm not sure. Here are some training specs:

- Base model: flux-1-dev (fp16)
- Acceleration mixed precision: BF16
- Train fp8: True
- Save as: FP16
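
If you're unsure what precision the LoRA was actually saved in, a quick way to check is to inspect the tensor dtypes in the safetensors file. This is a minimal sketch, not part of Forge; `my_flux_lora.safetensors` is a placeholder path for your own file:

```python
# Sketch: count the tensor dtypes stored in a LoRA safetensors file.
# If most tensors are a float8 dtype (e.g. torch.float8_e4m3fn), the LoRA
# was saved in fp8; torch.float16 / torch.bfloat16 indicate a half-precision save.
from collections import Counter
from safetensors import safe_open

lora_path = "my_flux_lora.safetensors"  # placeholder path

dtypes = Counter()
with safe_open(lora_path, framework="pt", device="cpu") as f:
    for key in f.keys():
        dtypes[str(f.get_tensor(key).dtype)] += 1

print(dtypes)
```

With "Save as: FP16" in the trainer, the weights should come out as float16 even if fp8 was used during training, but checking the file directly removes the guesswork.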