Akegarasu / lora-scripts

LoRA & Dreambooth training scripts & GUI using kohya-ss's trainer, for diffusion models.
GNU Affero General Public License v3.0

Training fails with "UnboundLocalError: local variable 'text_encoder_conds' referenced before assignment" #497

Closed Auska0924 closed 2 weeks ago

Auska0924 commented 2 weeks ago

17:18:45-353400 INFO Training started with config file / 训练开始,使用配置文件: C:\lora-scripts-v1.8.1\config\autosave\20240830-171845.toml
17:18:45-356394 INFO Task b86d93d3-6a6c-4824-b4bd-7ce3cc9329ac created
2024-08-30 17:18:50 INFO Loading settings from C:\lora-scripts-v1.8.1\config\autosave\20240830-171845.toml... train_util.py:4189
                INFO     C:\lora-scripts-v1.8.1\config\autosave\20240830-171845 train_util.py:4208
2024-08-30 17:18:50 INFO Using v1 tokenizer strategy_sd.py:26
2024-08-30 17:18:51 INFO Using DreamBooth method. train_network.py:281
                INFO     prepare images. train_util.py:1803
                INFO     get image size from name of cache files train_util.py:1741
100%|██████████| 15/15 [00:00<00:00, 9875.15it/s]
                INFO     set image size from cache files: 0/15 train_util.py:1748
                INFO     found directory C:\lora-scripts-v1.8.1\train\Auska\8_Auska\5_zkz contains 15 image files train_util.py:1750
                INFO     75 train images with repeating. train_util.py:1844
                INFO     0 reg images. train_util.py:1847
                WARNING  no regularization images / 正則化画像が見つかりませんでした train_util.py:1852
                INFO     [Dataset 0]                                                              config_util.py:570
                           batch_size: 1
                           resolution: (512, 512)
                           enable_bucket: True
                           network_multiplier: 1.0
                           min_bucket_reso: 256
                           max_bucket_reso: 1024
                           bucket_reso_steps: 64
                           bucket_no_upscale: False

                           [Subset 0 of Dataset 0]
                             image_dir: "C:\lora-scripts-v1.8.1\train\Auska\8_Auska\5_zkz"
                             image_count: 15
                             num_repeats: 5
                             shuffle_caption: True
                             keep_tokens: 0
                             keep_tokens_separator:
                             caption_separator: ,
                             secondary_separator: None
                             enable_wildcard: False
                             caption_dropout_rate: 0.0
                             caption_dropout_every_n_epoches: 0
                             caption_tag_dropout_rate: 0.0
                             caption_prefix: None
                             caption_suffix: None
                             color_aug: False
                             flip_aug: False
                             face_crop_aug_range: None
                             random_crop: False
                             token_warmup_min: 1
                             token_warmup_step: 0
                             alpha_mask: False
                             is_reg: False
                             class_tokens: zkz
                             caption_extension: .txt

                INFO     [Dataset 0]                                                              config_util.py:576
                INFO     loading image sizes.                                                      train_util.py:876

100%|██████████| 15/15 [00:00<?, ?it/s]
                INFO     make buckets train_util.py:882
                INFO     number of images (including repeats) / 各bucketの画像枚数(繰り返し回数を含む) train_util.py:928
                INFO     bucket 0: resolution (512, 512), count: 75 train_util.py:933
                INFO     mean ar error (without repeats): 0.0 train_util.py:938
                INFO     preparing accelerator train_network.py:335
accelerator device: cuda
                INFO     loading model for process 0/1 train_util.py:4811
                INFO     load StableDiffusion checkpoint: C:/lora-scripts-v1.8.1/sd-models/meinamix_meinaV11.safetensors train_util.py:4767
                INFO     UNet2DConditionModel: 64, 8, 768, False, False original_unet.py:1387
2024-08-30 17:18:54 INFO loading u-net: model_util.py:1009
                INFO     loading vae: model_util.py:1017
2024-08-30 17:18:55 INFO loading text encoder: model_util.py:1074
2024-08-30 17:18:56 INFO Enable xformers for U-Net train_util.py:3053
import network module: networks.lora
                INFO     [Dataset 0] train_util.py:2326
                INFO     caching latents with caching strategy. train_util.py:984
                INFO     checking cache validity... train_util.py:994
100%|██████████| 15/15 [00:00<00:00, 263240.84it/s]
                INFO     caching latents... train_util.py:1038
100%|██████████| 15/15 [00:02<00:00, 5.75it/s]
2024-08-30 17:18:59 INFO create LoRA network. base dim (rank): 32, alpha: 32 lora.py:935
                INFO     neuron dropout: p=None, rank dropout: p=None, module dropout: p=None lora.py:936
                INFO     create LoRA for Text Encoder: lora.py:1030
                INFO     create LoRA for Text Encoder: 72 modules. lora.py:1035
                INFO     create LoRA for U-Net: 192 modules. lora.py:1043
                INFO     enable LoRA for text encoder: 72 modules lora.py:1084
                INFO     enable LoRA for U-Net: 192 modules lora.py:1089
prepare optimizer, data loader etc.
                INFO     use 8-bit AdamW optimizer | {} train_util.py:4342
override steps.
steps for 10 epochs is / 指定エポックまでのステップ数: 750
running training / 学習開始
  num train images * repeats / 学習画像の数×繰り返し回数: 75
  num reg images / 正則化画像の数: 0
  num batches per epoch / 1epochのバッチ数: 75
  num epochs / epoch数: 10
  batch size per device / バッチサイズ: 1
  gradient accumulation steps / 勾配を合計するステップ数 = 1
  total optimization steps / 学習ステップ数: 750
steps:   0%| | 0/750 [00:00<?, ?it/s]
2024-08-30 17:19:01 INFO unet dtype: torch.float16, device: cuda:0 train_network.py:1030
                INFO     text_encoder dtype: torch.float16, device: cuda:0 train_network.py:1032
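For reference, the step count in this log is plain dataset arithmetic: 15 images × 5 repeats = 75 images per epoch, and with batch size 1 and no gradient accumulation that gives 75 steps per epoch × 10 epochs = 750 total steps. A minimal sketch of the calculation (variable names are illustrative, not from sd-scripts):

```python
# Step-count arithmetic matching the values reported in the log above.
images, repeats = 15, 5          # image files and num_repeats from [Subset 0]
batch_size, grad_accum = 1, 1    # per-device batch size, accumulation steps
epochs = 10

steps_per_epoch = (images * repeats) // (batch_size * grad_accum)
total_steps = steps_per_epoch * epochs
print(steps_per_epoch, total_steps)  # 75 750
```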

epoch 1/10
2024-08-30 17:19:32 INFO epoch is incremented. current_epoch: 0, epoch: 1 train_util.py:668
(previous message repeated 7 times)
Traceback (most recent call last):
  File "C:\lora-scripts-v1.8.1\sd-scripts\train_network.py", line 1396, in
    trainer.train(args)
  File "C:\lora-scripts-v1.8.1\sd-scripts\train_network.py", line 1088, in train
    text_encoder_conds is None
UnboundLocalError: local variable 'text_encoder_conds' referenced before assignment
steps:   0%| | 0/750 [00:31<?, ?it/s]
17:19:33-690734 ERROR Training failed / 训练失败
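For context, an `UnboundLocalError` of this kind occurs when a local variable is only assigned on some control-flow paths but read unconditionally afterwards; the traceback suggests `text_encoder_conds` is first bound inside a branch of the training loop that was skipped. A minimal sketch of the failure pattern and the usual fix, with hypothetical names standing in for the real training code:

```python
# Minimal reproduction of the failure mode: a local bound only on one
# control-flow path, then read unconditionally. Names are illustrative
# and do not reflect the actual sd-scripts implementation.

def train_step_buggy(cache_hit: bool):
    if not cache_hit:
        text_encoder_conds = "computed embeddings"  # only bound on this path
    # When cache_hit is True, the next line raises UnboundLocalError.
    return text_encoder_conds is None

def train_step_fixed(cache_hit: bool):
    text_encoder_conds = None  # initialize before any branch
    if not cache_hit:
        text_encoder_conds = "computed embeddings"
    if text_encoder_conds is None:
        text_encoder_conds = "loaded from cache"
    return text_encoder_conds

try:
    train_step_buggy(cache_hit=True)
except UnboundLocalError as e:
    print(f"buggy path raised: {e!r}")

print(train_step_fixed(cache_hit=True))   # falls back cleanly
```

Initializing the variable before the branch (as in `train_step_fixed`) is the standard guard; the upstream fix referenced at the end of this thread presumably does something equivalent.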

gardenia061002 commented 2 weeks ago

(Screenshot: 2024-08-30 185924)

Go to the sd-scripts repo and download the older version v0.8.5, then replace the contents of the local sd-scripts folder with it. It works normally for me now.
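The rollback described above can be done from the command line; a sketch assuming the upstream kohya-ss/sd-scripts repository, that v0.8.5 is an available release tag, and that you run it from the lora-scripts root:

```shell
# Workaround: pin the bundled sd-scripts to the v0.8.5 release.
# Back up the current copy first so the change is easy to undo.
mv sd-scripts sd-scripts.bak
git clone --depth 1 --branch v0.8.5 https://github.com/kohya-ss/sd-scripts.git sd-scripts
```

Keeping the backup directory means you can restore the newer sd-scripts once the upstream fix lands.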

Akegarasu commented 2 weeks ago

fixed