Open iqddd opened 1 month ago
Why do I have to wait for LoRA weights patching on the flux.1-dev (fp16) model even when Automatic mode (fp16 LoRA) is set?
Is it perhaps an fp8 LoRA?
I'm not sure. Here are the training specs:
- Base model: flux-1-dev (fp16)
- Acceleration mixed precision: BF16
- Train fp8: True
- Save as: FP16
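One way to settle whether the LoRA was really saved in fp16 is to inspect the tensor dtypes in the file itself. A minimal sketch, assuming the LoRA is stored as a .safetensors file (the file name below is a placeholder):

```python
# Minimal sketch (not from this thread): list the dtypes a LoRA's
# tensors were saved in. Assumes a .safetensors file; the path is
# a placeholder.
from safetensors import safe_open

with safe_open("my_lora.safetensors", framework="pt", device="cpu") as f:
    # Collect the dtype of every tensor stored in the file.
    dtypes = {str(f.get_tensor(key).dtype) for key in f.keys()}

print(dtypes)  # e.g. {'torch.float16'} -> fp16 LoRA; a float8 dtype -> fp8 LoRA
```

If the set contains a float8 dtype, the runtime would have to convert the weights before merging them into the fp16 base, which might explain the patching wait.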