kohya-ss / sd-scripts


FLUX LoRA Training - Text Encoder active training indicated when it should be disabled with --network_train_unet_only #1634

Open

Enyakk commented 1 week ago

The log for my training shows:

```
INFO create LoRA network. base dim (rank): 128, alpha: 128                 lora_flux.py:594
INFO neuron dropout: p=0.25, rank dropout: p=None, module dropout: p=None  lora_flux.py:595
INFO split qkv for LoRA                                                    lora_flux.py:603
INFO train all blocks only                                                 lora_flux.py:605
INFO create LoRA for Text Encoder 1:                                       lora_flux.py:741
INFO create LoRA for Text Encoder 1: 72 modules.                           lora_flux.py:744
INFO create LoRA for FLUX all blocks: 6 modules.                           lora_flux.py:765
INFO enable LoRA for U-Net: 6 modules                                      lora_flux.py:916
```

That is despite using the option --network_train_unet_only both on the command line and in the config TOML (network_train_unet_only = true). I use the following command line:

```
accelerate launch --num_cpu_threads_per_process 1 flux_train_network.py --persistent_data_loader_workers --max_data_loader_n_workers 2 --highvram --network_train_unet_only --config_file %1
```

Attached config: config_lora.zip

This option might be out of the ordinary:

```toml
network_args = [
  "train_double_block_indices=none",
  "train_single_block_indices=7,20",
  "split_qkv=True",
]
```

Enyakk commented 1 week ago

I have retested with a standard LoRA (training all blocks) and I get the same result: TE1 training is indicated in the log. I don't know whether that is actually the case, but it does look that way to me. The size of the resulting .safetensors file does indicate that it includes the additional TE1 modules, though.

kohya-ss commented 1 week ago

```
INFO create LoRA for Text Encoder 1:               lora_flux.py:741
INFO create LoRA for Text Encoder 1: 72 modules.   lora_flux.py:744
INFO create LoRA for FLUX all blocks: 6 modules.   lora_flux.py:765
INFO enable LoRA for U-Net: 6 modules              lora_flux.py:916
```

This log indicates that several LoRA modules were created, but only six of them were trained as active modules.
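Conceptually it is something like the sketch below (a minimal PyTorch illustration only, not the actual sd-scripts code; the layer shapes and names are made up): LoRA modules exist for both the Text Encoder and the DiT, but with --network_train_unet_only the Text Encoder modules are frozen and only the DiT parameters are handed to the optimizer.

```python
# Minimal illustration of "created but not trained" (not sd-scripts code;
# the layer shapes and names here are invented for the example).
import torch
import torch.nn as nn

te_lora = nn.Linear(768, 768)     # stands in for a Text Encoder LoRA module
dit_lora = nn.Linear(3072, 3072)  # stands in for a FLUX (DiT) LoRA module

# "Train U-Net only": both modules exist, but the TE module is frozen...
te_lora.requires_grad_(False)

# ...and only parameters that still require gradients are given to the
# optimizer, so the Text Encoder LoRA weights are never updated.
trainable = [p for p in dit_lora.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```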

Please run networks/check_lora_weights.py with python networks/check_lora_weights.py path/to/trained_lora.safetensors. It shows the keys in the safetensors, so uou can see which modules are included. If it contains modules for Text Encoders (keys start with lora_te), please report it. Otherwise only U-Net(DiT) is trained, and it's fine.