FurkanGozukara opened this issue 1 year ago
@DarkAlchy `train_db.py` trains the Text Encoder by default, for SD 1.5 and 2.0/2.1, as FurkanGozukara said. If you do not want to train the Text Encoder, please add the option `--stop_text_encoder_training=-1`.
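For reference, here is a minimal sketch of how a step-threshold flag like this can behave, consistent with the description above. This is an illustration only, not the actual `train_db.py` implementation, and the function name is made up:

```python
from typing import Optional

def should_train_text_encoder(global_step: int, stop_step: Optional[int]) -> bool:
    """Illustrative only: return True while the text encoder should still
    receive gradient updates.

    stop_step is None  -> train the text encoder for the whole run (default)
    stop_step == -1    -> never train it (0 < -1 is already False at step 0)
    stop_step >= 0     -> train it only until that step is reached
    """
    if stop_step is None:
        return True
    return global_step < stop_step

# With --stop_text_encoder_training=-1 the text encoder would be frozen
# from the very first step under this interpretation.
assert should_train_text_encoder(global_step=0, stop_step=-1) is False
assert should_train_text_encoder(global_step=0, stop_step=None) is True
```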
> @DarkAlchy `train_db.py` trains the Text Encoder by default, for SD 1.5 and 2.0/2.1, as FurkanGozukara said. If you do not want to train the Text Encoder, please add the option `--stop_text_encoder_training=-1`.
=-1? Alright.
Here is the executed command:
```
accelerate launch --num_cpu_threads_per_process=2 "./train_db.py" --pretrained_model_name_or_path="/workspace/stable-diffusion-webui/models/Stable-diffusion/Realistic_Vision_V5.1.safetensors" --train_data_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/img" --reg_data_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/reg" --resolution="768,768" --output_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/model" --logging_dir="/workspace/stable-diffusion-webui/models/Stable-diffusion/log" --save_model_as=safetensors --full_bf16 --output_name="me_1e7" --lr_scheduler_num_cycles="4" --max_data_loader_n_workers="0" --learning_rate="1e-07" --lr_scheduler="constant" --train_batch_size="1" --max_train_steps="4160" --save_every_n_epochs="1" --mixed_precision="bf16" --save_precision="bf16" --cache_latents --cache_latents_to_disk --optimizer_type="Adafactor" --optimizer_args scale_parameter=False relative_step=False warmup_init=False weight_decay=0.01 --max_data_loader_n_workers="0" --bucket_reso_steps=64 --xformers --bucket_no_upscale --noise_offset=0.0
```
When the text encoder is not trained, it is supposed to print `Text Encoder is not trained.` That message was not printed either.
So how do I know the text encoder was not trained? Because I extracted a LoRA from the result and the extraction reports that the text encoder is the same as the base model.
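One way to check this directly (a rough sketch, not part of sd-scripts; the file paths are placeholders and the `cond_stage_model.` prefix assumes the usual SD 1.x full-checkpoint layout) is to diff the text-encoder tensors of the base and trained checkpoints:

```python
# Sketch: count how many text-encoder tensors differ between the base
# checkpoint and the trained output. Paths and key prefix are assumptions.
import torch
from safetensors.torch import load_file

base = load_file("Realistic_Vision_V5.1.safetensors")
trained = load_file("me_1e7.safetensors")

changed = 0
total = 0
for key, base_tensor in base.items():
    if not key.startswith("cond_stage_model."):
        continue  # only compare text-encoder weights
    total += 1
    if key in trained and not torch.equal(base_tensor.float(), trained[key].float()):
        changed += 1

print(f"text encoder tensors changed: {changed}/{total}")
```

If this prints `0/<total>`, the text encoder weights in the trained checkpoint are bit-identical to the base model, i.e. it was not trained.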
I did 30 trainings, and so many of them were wasted because of this bug :/
@kohya-ss