kohya-ss / sd-scripts

Apache License 2.0

Support FLUX series models #1445

Open ddpasa opened 1 month ago

ddpasa commented 1 month ago

These models have just been released and appear to be amazing. Links below:

Blog from fal.ai: https://blog.fal.ai/flux-the-largest-open-sourced-text2img-model-now-available-on-fal/

Huggingface: https://huggingface.co/black-forest-labs

There is a schnell version and a dev version.

Oliverkuien commented 1 month ago

Strongly agree!

LazyCat420 commented 1 month ago

Is it possible to finetune the model on a 3090, or do we have to do a LoRA due to the size?

ThereforeGames commented 1 month ago

I'm wondering if image gen models would benefit from the sophisticated quantization methods that are popular in the LLM space, like GGUF. Any ongoing research in this area?

Apparently some folks have trained LoRAs on quantized LLMs to good effect, e.g. https://old.reddit.com/r/LocalLLaMA/comments/13q8zjc/how_much_why_does_quantization_negatively_affect/
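For reference, the LLM-side recipe mentioned there (training LoRAs on a quantised base, i.e. QLoRA) looks roughly like the sketch below; the model id and hyperparameters are placeholders, not something from this thread:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA-style setup: the frozen base weights are stored in 4-bit NF4,
# and only the LoRA adapter matrices are trained in higher precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder model id
    quantization_config=bnb_config,
    device_map="auto",
)
lora_config = LoraConfig(
    r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA matrices are trainable
```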

leonary commented 1 month ago

I totally agree. Since SD3 may not be able to fit a slightly larger dataset due to model problems (scripts include SimpleTuner, SD-scripts, OneTrainer), it is recommended to stop developing SD3 training scripts. I did a simple test on Flux-dev, and its capabilities are completely superior to SD3. Here are some examples: [example images attached] It's worth pointing out that this is the first model I've seen that can correctly draw the position of the umbrella handle and the umbrella canopy, and the text prompt on the road sign is "iiilllllbddbwW". Although the AI didn't draw it correctly, I haven't seen any model that can draw it correctly either.

dill-shower commented 1 month ago

> I totally agree. Since SD3 may not be able to fit a slightly larger dataset due to model problems (scripts include SimpleTuner, SD-scripts, OneTrainer), it is recommended to stop developing SD3 training scripts. I did a simple test on Flux-dev, and its capabilities are completely superior to SD3. Here are some examples:

I strongly disagree. While the SD3 Medium model has certain drawbacks, it possesses a crucial advantage that FLUX lacks: its weights are publicly available. In contrast, FLUX only provides access to the base model's weights through an API, with no indication or information suggesting they plan to make it open-source. The models that are publicly accessible are derived through distillation of the base model; they are truncated, incomplete, and practically unsuitable for further training. It only makes sense to train the model we weren't given, as fine-tuning the distilled models would require roughly the same effort as training from scratch, if not more. Even the SDXL model was superior in this regard.

Calling it open-source is akin to labeling GPT-4o as open-source simply because we were given GPT-3 weights and the ability to fine-tune it. I'm concerned that we'll be wasting time that could be better spent studying SD3, debugging and optimizing its training script. SD3 has more potential, and Stability AI has promised to eventually release all models, including their weights, as open-source. This makes SD3 a more promising avenue for our efforts.

leonary commented 1 month ago

> I strongly disagree. While the SD3 Medium model has certain drawbacks, it possesses a crucial advantage that FLUX lacks: its weights are publicly available. In contrast, FLUX only provides access to the base model's weights through an API, with no indication or information suggesting they plan to make it open-source. The models that are publicly accessible are derived through distillation of the base model; they are truncated, incomplete, and practically unsuitable for further training. It only makes sense to train the model we weren't given, as fine-tuning the distilled models would require roughly the same effort as training from scratch, if not more. Even the SDXL model was superior in this regard.

Hello, the weights for the Flux series models have been released, including the dev version and the schnell version. The weights for the Pro version have not been released and can only be accessed via API, but the performance gap between the dev version and the Pro version is not significant, and both should have surpassed SD3. You can find their weights here: flux_dev, flux_schnell. Diffusers has initial support for LoRA training with Flux: diffusers. SimpleTuner has initial compatibility with Flux LoRA training in their scripts: SimpleTuner. ComfyUI now supports Flux and its initial LoRA: ComfyUI.
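For anyone checking, a minimal diffusers sketch that loads the released dev weights locally; the prompt and settings below are just common defaults, not recommendations from this thread:

```python
import torch
from diffusers import FluxPipeline

# Load the publicly released FLUX.1-dev weights and run them locally.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # helps on GPUs with limited VRAM

image = pipe(
    "a red umbrella leaning against a road sign",
    height=1024,
    width=1024,
    guidance_scale=3.5,        # dev uses a distilled guidance value rather than true CFG
    num_inference_steps=28,
).images[0]
image.save("flux_dev_test.png")
```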

dill-shower commented 1 month ago

> Hello, the weights for the Flux series models have been released, including the dev version and the schnell version.

Please read this https://blog.fal.ai/flux-the-largest-open-sourced-text2img-model-now-available-on-fal/ Dev and schnell are obtained by distillation of the Pro weights. It is possible to create LoRAs for them, and they will work. But full model training is practically impossible because of this.

ddpasa commented 1 month ago

> Hello, the weights for the Flux series models have been released, including the dev version and the schnell version.

> Please read this https://blog.fal.ai/flux-the-largest-open-sourced-text2img-model-now-available-on-fal/ Dev and schnell are obtained by distillation of the Pro weights. It is possible to create LoRAs for them, and they will work. But full model training is practically impossible because of this.

It should be possible to fine-tune distilled models.

dill-shower commented 1 month ago

> It should be possible to fine-tune distilled models.

Why should it? I just did a quick search for information about training SDXL Turbo, and it turns out it was also obtained through distillation from the base model. There are tons of such models on Civitai, but they're all created by merging SDXL Turbo with something else. I couldn't find a single one obtained through fine-tuning. The only relevant post I came across was a complaint on Reddit about how training SDXL Turbo produces very poor results. As I expected. https://www.reddit.com/r/StableDiffusion/comments/18l2qp0/sdxl_turbo_fine_tunemerging/

ddpasa commented 1 month ago

> It should be possible to fine-tune distilled models.

> Why should it? I just did a quick search for information about training SDXL Turbo, and it turns out it was also obtained through distillation from the base model. There are tons of such models on Civitai, but they're all created by merging SDXL Turbo with something else. I couldn't find a single one obtained through fine-tuning. The only relevant post I came across was a complaint on Reddit about how training SDXL Turbo produces very poor results. As I expected. https://www.reddit.com/r/StableDiffusion/comments/18l2qp0/sdxl_turbo_fine_tunemerging/

That is because the training code for Turbo was never released and nobody wrote one. It's not fundamentally impossible.

bghira commented 1 month ago

even training schnell with lora or full tune is fine. they're just big models and require the use of LoRA with quantised base weights, but Kohya should probably wait for the bugs to be worked out in Quanto first before going ahead and trying to integrate it. it makes a mess of the model state dict keys.
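A rough sketch, under assumptions, of the quantised-base LoRA setup being described here, using optimum-quanto with peft-style adapters on the diffusers Flux transformer; the repo id, target module names, and the add_adapter call are assumptions about the diffusers/peft side, not kohya's code, and (as noted above) Quanto's rewriting of the state dict keys is exactly the integration pain point, so this may not work as-is:

```python
import torch
from diffusers import FluxTransformer2DModel
from optimum.quanto import quantize, freeze, qint8
from peft import LoraConfig

# Quantise the frozen base transformer so it fits in less VRAM...
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", subfolder="transformer", torch_dtype=torch.bfloat16
)
quantize(transformer, weights=qint8)  # replace Linear weights with quantised tensors
freeze(transformer)                   # materialise and freeze the quantised weights

# ...then attach LoRA adapters, which stay in bf16 and are the only trainable parameters.
lora_config = LoraConfig(
    r=16, lora_alpha=16, target_modules=["to_q", "to_k", "to_v", "to_out.0"]
)
transformer.add_adapter(lora_config)

trainable = [p for p in transformer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable LoRA parameters")
```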

BenDes21 commented 1 month ago

@kohya-ss Training scripts released : https://github.com/XLabs-AI/x-flux

bghira commented 1 month ago

those are pretty minimal and, e.g., don't implement cosmap/logit-norm or any of the SD3 training details; it's just about the same as the cloneofsimo/minRF implementation. in fact it's basically identical - the interesting thing there is probably their ControlNet training implementation details

FurkanGozukara commented 1 month ago

diffusers scripts arrived

https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md

ddpasa commented 1 month ago

diffusers scripts arrived

https://github.com/huggingface/diffusers/blob/main/examples/dreambooth/README_flux.md

@FurkanGozukara , you are amazing as usual!

FurkanGozukara commented 1 month ago

@ddpasa thanks

pull request arrived already :d

https://github.com/kohya-ss/sd-scripts/pull/1374/files/da4d0fe0165b3e0143c237de8cf307d53a9de45a..36b2e6fc288c57f496a061e4d638f5641c32c9ea

cyan2k commented 1 month ago

> It should be possible to fine-tune distilled models.

> Why should it? I just did a quick search for information about training SDXL Turbo, and it turns out it was also obtained through distillation from the base model. There are tons of such models on Civitai, but they're all created by merging SDXL Turbo with something else. I couldn't find a single one obtained through fine-tuning. The only relevant post I came across was a complaint on Reddit about how training SDXL Turbo produces very poor results. As I expected. https://www.reddit.com/r/StableDiffusion/comments/18l2qp0/sdxl_turbo_fine_tunemerging/

Do you guys also love it when someone is so confidently incorrect?

My flux finetune is coming in very nicely. Huge upgrade compared to SDXL and Pony, also way more trainable than SD3 Medium. It's literally impossible to add NSFW to SD3 medium because of the complete lack of NSFW content in its training data. No finetuner is going to finish SAI's pathetic job. Nobody is ever going to create any kind of content for SD3 when you can create better results for the same money with flux. so yeah, rip.

Flux seems to have seen plenty of NSFW images, and it's just filtered and dropped out via captioning. So the context and knowledge already exists in the latent space, and it only needs to... well get finetuned.

So, yeah f*ck SD3. Pyro's NSFW model goes FLUX.

protector131090 commented 1 month ago

> It should be possible to fine-tune distilled models.

> Why should it? I just did a quick search for information about training SDXL Turbo, and it turns out it was also obtained through distillation from the base model. There are tons of such models on Civitai, but they're all created by merging SDXL Turbo with something else. I couldn't find a single one obtained through fine-tuning. The only relevant post I came across was a complaint on Reddit about how training SDXL Turbo produces very poor results. As I expected. https://www.reddit.com/r/StableDiffusion/comments/18l2qp0/sdxl_turbo_fine_tunemerging/

> Do you guys also love it when someone is so confidently incorrect?

> My flux finetune is coming in very nicely. Huge upgrade compared to SDXL and Pony, also way more trainable than SD3 Medium. It's literally impossible to add NSFW to SD3 medium because of the complete lack of NSFW content in its training data. No finetuner is going to finish SAI's pathetic job. Nobody is ever going to create any kind of content for SD3 when you can create better results for the same money with flux. so yeah, rip.

> Flux seems to have seen plenty of NSFW images, and it's just filtered and dropped out via captioning. So the context and knowledge already exists in the latent space, and it only needs to... well get finetuned.

> So, yeah f*ck SD3. Pyro's NSFW model goes FLUX.

what are you talking about? i trained 3.0 for 30 minutes and it can generate nsfw just fine. NSFW link https://imgur.com/a/sd-30-test-G7G7G6u

dill-shower commented 1 month ago

> It should be possible to fine-tune distilled models.

> Why should it? I just did a quick search for information about training SDXL Turbo, and it turns out it was also obtained through distillation from the base model. There are tons of such models on Civitai, but they're all created by merging SDXL Turbo with something else. I couldn't find a single one obtained through fine-tuning. The only relevant post I came across was a complaint on Reddit about how training SDXL Turbo produces very poor results. As I expected. https://www.reddit.com/r/StableDiffusion/comments/18l2qp0/sdxl_turbo_fine_tunemerging/

> Do you guys also love it when someone is so confidently incorrect? My flux finetune is coming in very nicely. Huge upgrade compared to SDXL and Pony, also way more trainable than SD3 Medium. It's literally impossible to add NSFW to SD3 medium because of the complete lack of NSFW content in its training data. No finetuner is going to finish SAI's pathetic job. Nobody is ever going to create any kind of content for SD3 when you can create better results for the same money with flux. so yeah, rip. Flux seems to have seen plenty of NSFW images, and it's just filtered and dropped out via captioning. So the context and knowledge already exists in the latent space, and it only needs to... well get finetuned. So, yeah f*ck SD3. Pyro's NSFW model goes FLUX.

> what are you talking about? i trained 3.0 for 30 minutes and it can generate nsfw just fine. NSFW link https://imgur.com/a/sd-30-test-G7G7G6u

Someone just reads too much Reddit and similar places, where everyone is convinced that if a model wasn't trained on NSFW then it will never be able to create such things. How people created anime, furry and all the other models for SDXL, no one knows. Lost technology.

In all seriousness, there's nothing stopping SD3 from learning to create any NSFW content and even worse. Due to the more efficient architecture, training does not require as much GPU overhead as SDXL.

I don't understand why everyone is so crazy about FLUX, and why my comment saying that it has no weights and is accessible only via API gets downvoted.

cyan2k commented 1 month ago

> Someone just reads too much Reddit and similar places, where everyone is convinced that if a model wasn't trained on NSFW then it will never be able to create such things.

We (a group of SDXL finetuners) spent around 5k bucks making NSFW in SD3 work, but a model that can't even render women lying in grass is so lobotomized that re-introducing NSFW takes immense resources, as in the ballpark of SAI's training infrastructure. No hobby finetuner is going to pay for that. Nobody is going to pay for that if they can get way better results for a fraction of the cost with FLUX.

It's not hard to understand. It took 20 bucks to teach FLUX NSFW concepts... $5k vs $20 is pretty clear cut.

> How people created anime, furry and all the other models for SDXL, no one knows. Lost technology.

Well, it seems that you don't know the basics of how training such models works and how self-organisation of embeddings in the latent space works. LAION, the data corpus of SDXL, is full of furry and anime shit. The SD3 data corpus has exactly 0 NSFW images in it. And you honestly have difficulty understanding why one is trainable and the latter isn't? You're on the wrong board then.

Please stop talking about things you don't have a clue about.

Also FLUX is runnable locally and the weights are public, so I don't even know what "it has no weights and is accessible only via API" even means.

dill-shower commented 1 month ago

> We (a group of SDXL finetuners) spent around 5k bucks making NSFW in SD3 work, but a model that can't even render women lying in grass is so lobotomized

StabilityAI promised to release the 3.1 model soon. They promised to fix this problem in it. You were in too much of a hurry.

> Well, it seems that you don't know the basics of how training such models works and how self-organisation of embeddings in the latent space works. LAION, the data corpus of SDXL, is full of furry and anime shit

When SDXL came out, the same things were written about it on Reddit that are now being written about SD3: that it didn't use NSFW content in training, so NSFW training is impossible, "it's a terrible model, StabilityAI killed their reputation by refusing to train on NSFW content, we can't use it, we're staying on SD1.5"... Just like they wrote about SD2... Let's wait a year and find out that there was NSFW in the SD3 dataset but it was removed from the SD4 dataset, so we stay on SD3 and boycott the new model...

> Also FLUX is runnable locally and the weights are public

Please give me a link to download the Flux-pro model.

D3voz commented 1 month ago

People are using SimpleTuner for Flux LoRA creation. Unfortunately it has no Windows support. Waiting for kohya ss :). Flux dev is so much better than SD3 💯

kohya-ss commented 1 month ago

> People are using SimpleTuner for Flux LoRA creation. Unfortunately it has no Windows support. Waiting for kohya ss :). Flux dev is so much better than SD3 💯

Now sd3 branch supports FLUX.1 dev LoRA training experimentally :) https://github.com/kohya-ss/sd-scripts/tree/sd3

leonary commented 1 month ago

> StabilityAI promised to release the 3.1 model soon. They promised to fix this problem in it. You were in too much of a hurry.

If SD3.1 could achieve the performance of Flux Dev while allowing training and sharing, and if the machine costs required for fine-tuning are lower than those of Flux Dev, I would be very willing to use SD3.1. However, given the performance of SD3 8b and the licensing of the SD3 series, I am pessimistic about this possibility.

leonary commented 1 month ago

> Now sd3 branch supports FLUX.1 dev LoRA training experimentally :) https://github.com/kohya-ss/sd-scripts/tree/sd3

Thank you for your excellent work. The fine-tuning results of sd-scripts with Flux have completely met my expectations, and its performance is on par with SimpleTuner.

Additionally, is there any plan to support Flux in some of the LoRA processing scripts? These scripts could help the community more quickly develop models like "detail enhancer."
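For context, the kind of LoRA post-processing being asked about (merging, rescaling, "detail enhancer"-style strength tweaks) ultimately reduces to operating on the low-rank weight delta. A small PyTorch sketch follows, with a hypothetical helper name and the usual rank/alpha conventions rather than sd-scripts' actual API:

```python
import torch

def merge_lora_pair(weight, lora_down, lora_up, alpha, scale=1.0):
    """Merge one LoRA pair into a base weight: W' = W + scale * (alpha / rank) * up @ down."""
    rank = lora_down.shape[0]
    delta = (alpha / rank) * (lora_up.float() @ lora_down.float())
    return (weight.float() + scale * delta).to(weight.dtype)

# Toy example: a rank-4 LoRA merged at 80% strength.
weight = torch.randn(3072, 3072, dtype=torch.bfloat16)   # base layer weight
lora_down = torch.randn(4, 3072) * 0.01                  # "lora_down" (A)
lora_up = torch.randn(3072, 4) * 0.01                    # "lora_up" (B)
merged = merge_lora_pair(weight, lora_down, lora_up, alpha=1.0, scale=0.8)
```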

Tophness commented 1 month ago

> People are using SimpleTuner for Flux LoRA creation. Unfortunately it has no Windows support. Waiting for kohya ss :). Flux dev is so much better than SD3 💯

> Now sd3 branch supports FLUX.1 dev LoRA training experimentally :) https://github.com/kohya-ss/sd-scripts/tree/sd3

Will this work for the NF4 model that was released yesterday? Up to 4x speedups, reduced vram, increased quality.

https://civitai.com/models/638572/nf4-flux1 https://github.com/lllyasviel/stable-diffusion-webui-forge/discussions/981

bghira commented 1 month ago

you don't need an A100 for flux. imo kohya should release sooner rather than keep trying to add a million features. you can train on 16G VRAM without any quantisation at all.

Tophness commented 1 month ago

> you don't need an A100 for flux. imo kohya should release sooner rather than keep trying to add a million features. you can train on 16G VRAM without any quantisation at all.

It did in other trainers such as your own, but yeah, apparently not anymore.

The NF4 model is far superior though and more accessible for inference. FP8 used to be virtually unusable on my 4080 because it'd take about 5-10 mins for 1 overquantized generation since it overloads my shared memory, and now it's <1 min for outputs that look on par with Pro. Don't really wanna waste a week training an FP8 model that's already obsolete and can't be used by most people.

bghira commented 1 month ago

it's not like that at all though. fp8 is fine, especially in pytorch 2.4. you can read back through the comments in this issue to see.
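For what it's worth, a tiny PyTorch sketch of what fp8 (e4m3fn) weight storage amounts to: weights are kept in 8 bits and upcast for compute, so the cost is a bounded rounding error rather than anything catastrophic (the tensor below is just a random example):

```python
import torch

w = torch.randn(4096, 4096, dtype=torch.bfloat16)
w_fp8 = w.to(torch.float8_e4m3fn)      # store weights in 8-bit e4m3fn
w_back = w_fp8.to(torch.bfloat16)      # upcast again for the matmul

rel_err = (w - w_back).abs().max() / w.abs().max()
print(f"max relative rounding error: {rel_err.item():.4f}")
```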

bghira commented 1 month ago

also, NF4 is definitely not "on par with Pro" 🤪

Tophness commented 1 month ago

In one of the links above lllyasviel talks about quality theoretically and in benchmarks:

> NF4 may outperform FP8 (e4m3fn/e5m2) in numerical precision, and it does outperform e4m3fn/e5m2 in many (in fact, most) cases/benchmarks. NF4 is technically granted to outperform FP8 (e4m3fn/e5m2) in dynamic range in 100% of cases. This is because FP8 just converts each tensor to FP8, while NF4 is a sophisticated method to convert each tensor to a combination of multiple tensors with float32, float16, uint8, int4 formats to achieve maximized approximation.

I was using torch 2.4, so idk what to tell you. A lot of people are saying the same thing. Prominent youtubers testing it are saying they get ~4x speedups even on a 4090:

the regular flux Dev takes about 50 seconds to run on my machine on a RTX 4090, this takes 14 seconds to run https://www.youtube.com/watch?v=L5BjuPgF1Ds

cyan2k commented 1 month ago

@bghira finally got your shit running, and the results are amazing! thanks for entertaining me on reddit, hope I didn't piss you off all that much :)

I forked your repo, and made some fool-proof 1-click installer shizzle for runpod. just run two bash scripts and everything is set up and ready to go. if that's something you would like in your repo also just take it

bghira commented 1 month ago

i am glad it is working for you now.

cyan2k commented 1 month ago

Perhaps you misread me, or mixed me up with someone else, but I never used the n-word in my life. I may be an asshole, but at least I'm not a racist asshole.

bghira commented 1 month ago

must have just been a case of mistaken identity - that is my mistake now, apologies to @cyan2k

AmericanPresidentJimmyCarter commented 1 month ago

@cyan2k Yeah sorry, that was some guy on civitai. The haters have been out in full force lately

mliand commented 1 month ago

The dev and schnell versions of Flux are based on distillation of the Pro version, so it is meaningless to support fine-tuning; they are completely unable to learn complex tasks. It seems that there are still a large number of amateur users in this discussion.

AmericanPresidentJimmyCarter commented 1 month ago

> The dev and schnell versions of Flux are based on distillation of the Pro version, so it is meaningless to support fine-tuning; they are completely unable to learn complex tasks. It seems that there are still a large number of amateur users in this discussion.

I accidentally trained CFG back into flux.dev with a 6-hour LoRA. If you just set the "fake" CFG value (the one they baked in with distillation) to anything greater than 1, the distillation breaks and CFG is restored. You just have to skip the first couple of steps, because for a reason I don't yet understand, using CFG on them produces blurry images.

[comparison image attached]

@ostris is training CFG back into the schnell model on huggingface, dunno how that's going.
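A small sketch of the trick being described: standard classifier-free guidance, re-enabled on the distilled model but skipped for the first few steps. The function name and skip threshold are illustrative, not code from the LoRA run above:

```python
import torch

def guided_noise(noise_cond, noise_uncond, step, cfg_scale=3.5, skip_first_steps=2):
    """Classic CFG mix: uncond + scale * (cond - uncond).
    The first few steps are left unguided because, per the comment above,
    applying CFG there produces blurry images on FLUX.1-dev."""
    if step < skip_first_steps or cfg_scale <= 1.0:
        return noise_cond
    return noise_uncond + cfg_scale * (noise_cond - noise_uncond)
```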

dill-shower commented 4 weeks ago

> I accidentally trained CFG back into flux.dev with a 6-hour LoRA.

LoRA ≠ full finetune

bghira commented 4 weeks ago

^ this is incorrect. see LoRA paper about rank 128.

pyros-sd-models commented 4 weeks ago

> I accidentally trained CFG back into flux.dev with a 6-hour LoRA.

> LoRA ≠ full finetune

Are you speedrunning "how often can I be wrong in one thread"?

In the LLM world, "making a LoRA" and "fine-tuning" are essentially synonymous. For example, the most popular library for fine-tuning LLMs, https://github.com/unslothai/unsloth, only mentions fine-tunes, but it actually produces LoRAs. Crazy, right?

Somehow, only the Stable Diffusion community feels the need to differentiate between the two.

Mathematically, it doesn't really matter whether you change the weights directly in the model or through an adapter.

I always cringe when I read, "I prefer fine-tuning and then extracting a LoRA over directly training a LoRA," especially since DoRA is a thing now. There's absolutely no reason to do a complete and expensive fine-tune if your goal is to create a LoRA.

But, well, the SD community is pretty big on bro-science. It won't take long until someone tries to convince me that fine-tuning > LoRA or some nonsense, because in their anecdotal N=2 experiment, one image was subjectively nicer than the other, and it just so happened to be a full moon.
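A short PyTorch sketch of that last point, i.e. that applying a LoRA adapter during the forward pass and merging its delta into the weights give identical results (up to float rounding); shapes and the alpha/rank scaling follow the usual LoRA convention:

```python
import torch

torch.manual_seed(0)
d, r, alpha = 512, 16, 16.0
W = torch.randn(d, d)            # base weight
A = torch.randn(r, d) * 0.01     # "lora_down"
B = torch.randn(d, r) * 0.01     # "lora_up"
x = torch.randn(8, d)            # a batch of activations

adapter_out = x @ W.T + (alpha / r) * ((x @ A.T) @ B.T)   # base path + adapter path
merged_out = x @ (W + (alpha / r) * (B @ A)).T            # delta merged into the weight

print(torch.allclose(adapter_out, merged_out, atol=1e-3))  # True, only float error differs
```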

b-7777777 commented 4 weeks ago

The latest update to the SD3 branch script made this smooth sailing for 12GB VRAM Flux LoRA training. I've got a 4-hour timer on 1600 steps at the recommended settings and it's at most using 7GB of my 12GB GPU. BLESS YOU KOHYA!!! <3

D3voz commented 4 weeks ago

For some reason my Flux LoRA trained with kohya ss runs extremely slowly when used in ComfyUI. I get 200s/it with the kohya Flux LoRA, while I get 1.8s/it using other Flux LoRAs from Civitai.

leonary commented 3 weeks ago

> For some reason my Flux LoRA trained with kohya ss runs extremely slowly when used in ComfyUI. I get 200s/it with the kohya Flux LoRA, while I get 1.8s/it using other Flux LoRAs from Civitai.

Is the slow inference possibly due to insufficient VRAM or RAM? You could try training the LoRA with a lower dim value, or consider using a machine with more VRAM and RAM for inference. The fine-tuning results of Kohya's script are quite good; this is my result of fine-tuning Flux using Kohya's script, and there hasn't been any slowdown in inference speed.

[image attachment]

D3voz commented 3 weeks ago

Yes, using my kohya-trained LoRA eats all my VRAM in fp8; I tried with 16GB VRAM and 20GB VRAM machines. Using other LoRAs eats around 12GB VRAM. I might have done something wrong. I did use 512 resolution, which gave me around 3s/it during training on a 4060 Ti 16GB. I used this (it might have changed a little if it gave an error, but this is what I saved in my text file to copy-paste again):

accelerate launch --mixed_precision bf16 --num_cpu_threads_per_process 1 flux_train_network.py --pretrained_model_name_or_path "E:/ComfyUI/models/checkpoints/flux1-dev.safetensors" --clip_l "E:/ComfyUI/models/clip/clip_l.safetensors" --t5xxl "E:/ComfyUI/models/clip/t5xxl_fp16.safetensors" --ae "E:/ComfyUI/models/vae/ae.safetensors" --cache_latents_to_disk --save_model_as safetensors --sdpa --persistent_data_loader_workers --max_data_loader_n_workers 2 --seed 42 --gradient_checkpointing --mixed_precision bf16 --save_precision bf16 --network_module networks.lora_flux --network_dim 4 --optimizer_type adafactor --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False" --learning_rate 1e-4 --network_train_unet_only --cache_text_encoder_outputs --cache_text_encoder_outputs_to_disk --fp8_base --highvram --max_train_epochs 100 --save_every_n_epochs 10 --sample_every_n_steps 200 --sample_prompts "D:/github/ee/pt.txt" --sample_sampler "euler" --dataset_config "D:/github/ee/fx.toml" --output_dir "D:/github/kohya_ss/outputs" --output_name flux-lora-name --timestep_sampling sigmoid --model_prediction_type raw --guidance_scale 1.0 --loss_type l2

leonary commented 3 weeks ago

> Yes, using my kohya-trained LoRA eats all my VRAM in fp8; I tried with 16GB VRAM and 20GB VRAM machines. Using other LoRAs eats around 12GB VRAM.

That's strange, because a LoRA with a dim value of 4 shouldn't consume more VRAM. Assuming that most of the LoRAs on Civitai are currently trained with SimpleTuner, and considering that both SimpleTuner and sd-scripts only train the Unet, the VRAM requirements should be similar if the file sizes are comparable. Therefore, the slowdown in inference speed is likely due to cached data in VRAM and RAM that hasn't been cleared. You could try restarting your computer to fully clear the VRAM and RAM, or use some ComfyUI plugins to unload the cached data from RAM and VRAM, and then test the inference speed of the sd-scripts LoRA again.

bghira commented 3 weeks ago

LoRAs are fused during inference time in ComfyUI. there should be no slowdown
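For comparison outside ComfyUI, diffusers exposes the same idea explicitly: once a LoRA is fused into the base weights there is no per-step adapter overhead. A minimal sketch, with the LoRA path and scale as placeholders:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Load the trained LoRA, then fuse it into the base weights so inference
# runs at the same speed as the plain model.
pipe.load_lora_weights("path/to/flux-lora.safetensors")  # placeholder path
pipe.fuse_lora(lora_scale=1.0)

image = pipe("test prompt", num_inference_steps=28, guidance_scale=3.5).images[0]
```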

D3voz commented 3 weeks ago

For me it just goes out of memory when I use my LoRA trained on kohya. LoRA rank 4, alpha 1.

leonary commented 3 weeks ago

> For me it just goes out of memory when I use my LoRA trained on kohya. LoRA rank 4, alpha 1.

Would you mind sharing this lora? I would like to see how much video memory this lora would occupy on my machine.

D3voz commented 3 weeks ago

Unfortunately the LoRA is a person LoRA (an AI person, but still not mine), so I cannot send it. I will train a celeb LoRA again tomorrow and will send the link. The only thing I can think of is that I cropped the images to 512x512, as I am using 512 resolution for training. It is sad because the sampling images during training were very good and close to the actual look. Here the first one is a training sampling image and the second one is a training image. [image attachments] Edit: The LoRA worked when used with Forge, loading the single fp8 version of the model. It takes just around 12GB VRAM. Forge does patch the LoRA; I am not sure if Comfy also needs that. The quality of the LoRA is very good, but the face likeness is not as strong as in the images generated during training sampling (maybe a Forge thing).