Closed: Jonseed closed this issue 1 month ago
I've been using the Q8_0 GGUF version of Flux-dev most of the time, since it's fast and the closest to fp16. But I wondered if that might be the cause of the issue, so I switched to the NF4 version, and the images are much sharper. The fp8 version is also fine. Why would the 8-bit quantized version of Flux produce bad results with the LoRA?
I think I might have had "Diffusion in low bits" set to Automatic. I switched it to "Automatic (fp16 LoRA)", and that seems to have fixed it.
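A rough intuition for why that setting matters (this is my own toy illustration in numpy, not Forge's actual implementation): a LoRA delta is typically much smaller in magnitude than the base weights, so if it gets folded into the weights at 8-bit precision, much of it rounds away on the quantization grid. Applying the delta in fp16 on top of the quantized base keeps it intact.

```python
import numpy as np

def quantize_8bit(w):
    """Simulated symmetric 8-bit quantization: snap values to an int8 grid."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
W = rng.normal(0, 0.02, size=(64, 64)).astype(np.float32)        # base weight
delta = rng.normal(0, 0.0005, size=(64, 64)).astype(np.float32)  # small LoRA update

# Path A: merge the LoRA into the weight, then store at 8 bits.
# The delta is comparable to the quantization step, so it mostly rounds away.
recovered_a = quantize_8bit(W + delta) - quantize_8bit(W)

# Path B: quantize the base, apply the LoRA in higher precision (the
# "fp16 LoRA" behavior). The delta survives unchanged.
recovered_b = (quantize_8bit(W) + delta) - quantize_8bit(W)

err_a = np.abs(recovered_a - delta).mean()
err_b = np.abs(recovered_b - delta).mean()
print(f"delta error when merged at 8 bits: {err_a:.6f}")
print(f"delta error when applied in float: {err_b:.6f}")
```

Path A's error ends up on the same order as the delta itself, i.e. the LoRA's contribution is largely lost, which would show up as degraded, mushy output.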
I used fluxgym to train a LoRA on my 3060 12GB, following all the instructions and keeping most of the defaults. I had 36 good images, 512x512, all captioned. The sample images during training look nice and sharp, but when I use the LoRA in Forge, it produces blurry, noisy results every time. Why would the samples look fine during training but come out blurry/noisy in Forge?