liasece / sd-webui-train-tools

A Stable Diffusion WebUI training aid extension that helps you quickly and visually train models such as LoRA.

train-tools error #10

Closed kingzeus closed 1 year ago

kingzeus commented 1 year ago

I get an error when I start training:

```
Train Tools: train.train error replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'
Traceback (most recent call last):
  File "D:\ai\stable-diffusion-webui\extensions\sd-webui-train-tools\liasece_sd_webui_train_tools\train_ui.py", line 121, in on_train_begin_click
    train.train(cfg)
  File "D:\ai\stable-diffusion-webui\extensions\sd-webui-train-tools\liasece_sd_webui_train_tools\train.py", line 68, in train
    train_network.train(args)
  File "D:\ai\stable-diffusion-webui\extensions\sd-webui-train-tools\liasece_sd_webui_train_tools\sd_scripts\train_network.py", line 538, in train
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\accelerate\utils\operations.py", line 495, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\amp\autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 582, in forward    sample, res_samples = downsample_block(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 837, in forward
    hidden_states = attn(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\attention.py", line 291, in forward
    attn_output = self.attn1(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'
```

python 3.10.6
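
For context: the TypeError is a keyword-name mismatch between the xformers monkey-patch bundled in the extension's sd_scripts and the diffusers version active in the venv (0.14.0, per the startup log posted further down), which passes the conditioning tensor as `encoder_hidden_states`. A minimal sketch of the failure mode, with illustrative names rather than the extension's exact code:

```python
# A minimal sketch of the failure mode, not the extension's exact code
# (class and function bodies here are illustrative stand-ins).
class CrossAttention:
    # Stand-in for diffusers' attention module; diffusers 0.14.0 calls it
    # with the keyword `encoder_hidden_states` (see attention.py in the trace).
    def forward(self, hidden_states, encoder_hidden_states=None, attention_mask=None):
        return hidden_states

def replace_unet_cross_attn_to_xformers():
    # The bundled patch was written for an older diffusers API in which the
    # conditioning tensor was passed as `context`.
    def forward_xformers(self, x, context=None, mask=None):
        return x  # the real patch would run xformers' memory-efficient attention here
    CrossAttention.forward = forward_xformers

replace_unet_cross_attn_to_xformers()

attn = CrossAttention()
try:
    # Newer diffusers calls the patched forward with the renamed keyword:
    attn.forward(None, encoder_hidden_states=None)
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'encoder_hidden_states'
```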

liasece commented 1 year ago

Have you installed any other training extensions? Disable the other training extensions and try again; some extensions have conflicting dependencies.

kingzeus commented 1 year ago

It's a fresh install of the webui. Following https://zhuanlan.zhihu.com/p/611173551, I ran `pip install xformers==0.0.16rc425`.

liasece commented 1 year ago

Please post the log from webui startup, the complete log.

kingzeus commented 1 year ago

```
Microsoft Windows [Version 10.0.19044.1348]
(c) Microsoft Corporation. All rights reserved.

D:\ai\stable-diffusion-webui>webui --xformers --no-gradio-queue
check_pip
start_venv
venv "D:\ai\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 | packaged by conda-forge | (main, Oct 24 2022, 16:02:16) [MSC v.1916 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI

Installing sd-webui-train-tools requirement: D:\ai\stable-diffusion-webui\extensions\sd-webui-train-tools\requirements.txt

If submitting an issue on github, please provide the full startup log for debugging purposes.

Initializing Dreambooth
Dreambooth revision: 926ae204ef5de17efca2059c334b6098492a0641
Successfully installed accelerate-0.18.0 diffusers-0.14.0 fastapi-0.94.1 gitpython-3.1.31 requests-2.28.2 transformers-4.26.1

Does your project take forever to startup? Repetitive dependency installation may be the reason. Automatic1111's base project sets strict requirements on outdated dependencies. If an extension is using a newer version, the dependency is uninstalled and reinstalled twice every startup.

Your version of xformers is 0.0.16rc425.

xformers >= 0.0.17.dev is required to be available on the Dreambooth tab.

Torch 1 wheels of xformers >= 0.0.17.dev are no longer available on PyPI,

but you can manually download them by going to:

https://github.com/facebookresearch/xformers/actions

Click on the most recent action tagged with a release (middle column).

Select a download based on your environment.

Unzip your download

Activate your venv and install the wheel: (from A1111 project root)

cd venv/Scripts
activate
pip install {REPLACE WITH PATH TO YOUR UNZIPPED .whl file}

Then restart your project.

[!] xformers version 0.0.16rc425 installed.
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.
[+] accelerate version 0.18.0 installed.
[+] diffusers version 0.14.0 installed.
[+] transformers version 4.26.1 installed.
[+] bitsandbytes version 0.35.4 installed.

Launching Web UI with arguments: --xformers --no-gradio-queue
Civitai Helper: Get Custom Model Folder
Civitai Helper: Load setting from: D:\ai\stable-diffusion-webui\extensions\Stable-Diffusion-Webui-Civitai-Helper\setting.json
Loading weights [d725be5d18] from D:\ai\stable-diffusion-webui\models\Stable-diffusion\revAnimated_v11.safetensors
Creating model from config: D:\ai\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 137.4s (load weights from disk: 0.2s, create model: 0.4s, apply weights to model: 134.6s, apply half(): 0.5s, move model to device: 0.7s, load textual inversion embeddings: 0.9s).
[VRAMEstimator] Loaded benchmark data.
CUDA SETUP: Loading binary D:\ai\stable-diffusion-webui\venv\lib\site-packages\bitsandbytes\libbitsandbytes_cudaall.dll...
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 155.0s (import torch: 5.4s, import gradio: 1.1s, import ldm: 0.6s, other imports: 1.3s, load scripts: 3.7s, load SD checkpoint: 137.7s, create ui: 4.9s, gradio launch: 0.1s).
Train Tools: get_project_list: outputs\train_tools\projects\meta_boy
Train Tools: get_project_version_list: outputs\train_tools\projects\meta_boy\versions\test
Train Tools: get_project_version_list: outputs\train_tools\projects\meta_boy\versions\v1
Train Tools: ui_refresh_project: ['meta_boy']
Train Tools: on_ui_change_project_version_click: meta_boy v1
Train Tools: on_ui_change_project_click: meta_boy
Train Tools: get_project_version_list: outputs\train_tools\projects\meta_boy\versions\test
Train Tools: get_project_version_list: outputs\train_tools\projects\meta_boy\versions\v1
Train Tools: on_train_begin_click {'base_model': 'D:\ai\stable-diffusion-webui\models\Stable-diffusion\revAnimated_v11.safetensors', 'img_folder': 'D:\ai\stable-diffusion-webui\outputs\train_tools\projects\meta_boy\versions\v1\dataset\processed', 'output_folder': 'D:\ai\stable-diffusion-webui\outputs\train_tools\projects\meta_boy\versions\v1\trains\revAnimated_v11-bs-1-ep-20-op-AdamW8bit-lr-0_0001-net-128-ap-64', 'save_json_folder': None, 'save_json_name': None, 'load_json_path': None, 'multi_run_folder': None, 'reg_img_folder': None, 'sample_prompts': None, 'change_output_name': 'meta_boy-v1', 'json_load_skip_list': None, 'training_comment': None, 'save_json_only': False, 'tag_occurrence_txt_file': True, 'sort_tag_occurrence_alphabetically': False, 'optimizer_type': 'AdamW8bit', 'optimizer_args': {'weight_decay': '0.1', 'betas': '0.9,0.99'}, 'scheduler': 'cosine', 'cosine_restarts': 1, 'scheduler_power': 1, 'learning_rate': 0.0001, 'unet_lr': None, 'text_encoder_lr': None, 'warmup_lr_ratio': None, 'unet_only': False, 'net_dim': 128, 'alpha': 64, 'train_resolution': 512, 'height_resolution': None, 'batch_size': 1, 'clip_skip': 2, 'test_seed': 23, 'mixed_precision': 'fp16', 'save_precision': 'fp16', 'lyco': False, 'network_args': None, 'num_epochs': 20, 'save_every_n_epochs': 2, 'save_n_epoch_ratio': None, 'save_last_n_epochs': None, 'max_steps': None, 'sample_sampler': None, 'sample_every_n_steps': None, 'sample_every_n_epochs': None, 'buckets': True, 'min_bucket_resolution': 320, 'max_bucket_resolution': 960, 'bucket_reso_steps': None, 'bucket_no_upscale': False, 'shuffle_captions': False, 'keep_tokens': None, 'xformers': True, 'cache_latents': True, 'flip_aug': False, 'v2': False, 'v_parameterization': False, 'gradient_checkpointing': False, 'gradient_acc_steps': None, 'noise_offset': None, 'mem_eff_attn': False, 'lora_model_for_resume': None, 'save_state': False, 'resume': None, 'text_only': False, 'vae': None, 'log_dir': None, 'log_prefix': None, 'tokenizer_cache_dir': None, 'dataset_config': None, 'lowram': False, 'no_meta': False, 'color_aug': False, 'random_crop': False, 'use_8bit_adam': False, 'use_lion': False, 'caption_dropout_rate': None, 'caption_dropout_every_n_epochs': None, 'caption_tag_dropout_rate': None, 'prior_loss_weight': 1, 'max_grad_norm': 1, 'save_as': 'safetensors', 'caption_extension': '.txt', 'max_clip_token_length': 150, 'save_last_n_epochs_state': None, 'num_workers': 8, 'persistent_workers': True, 'face_crop_aug_range': None, 'network_module': 'networks.lora', 'locon_dim': None, 'locon_alpha': None, 'locon': False}
D:\ai\stable-diffusion-webui\outputs\train_tools\projects\meta_boy\versions\v1\dataset\processed 51_meta_boy
Created a txt file named meta_boy-v1.txt in the output folder
prepare tokenizer
update token length: 150
Use DreamBooth method.
prepare images.
found directory D:\ai\stable-diffusion-webui\outputs\train_tools\projects\meta_boy\versions\v1\dataset\processed\51_meta_boy contains 40 image files
2040 train images with repeating.
0 reg images.
no regularization images / 正則化画像が見つかりませんでした
[Dataset 0]
  batch_size: 1
  resolution: (512, 512)
  enable_bucket: True
  min_bucket_reso: 320
  max_bucket_reso: 960
  bucket_reso_steps: 64
  bucket_no_upscale: False

[Subset 0 of Dataset 0]
  image_dir: "D:\ai\stable-diffusion-webui\outputs\train_tools\projects\meta_boy\versions\v1\dataset\processed\51_meta_boy"
  image_count: 40
  num_repeats: 51
  shuffle_caption: False
  keep_tokens: 0
  caption_dropout_rate: 0.0
  caption_dropout_every_n_epoches: 0
  caption_tag_dropout_rate: 0.0
  color_aug: False
  flip_aug: False
  face_crop_aug_range: None
  random_crop: False
  is_reg: False
  class_tokens: meta_boy
  caption_extension: .txt

[Dataset 0]
loading image sizes.
100%|██████████| 20/20 [00:00<00:00, 4960.15it/s]
make buckets
number of images (including repeats) / 各bucketの画像枚数(繰り返し回数を含む)
bucket 0: resolution (512, 512), count: 1020
mean ar error (without repeats): 0.0
prepare accelerator
Using accelerator 0.15.0 or above.
load StableDiffusion checkpoint
loading u-net:
loading vae:
loading text encoder:
Replace CrossAttention.forward to use xformers
[Dataset 0]
caching latents.
100%|██████████| 20/20 [00:04<00:00, 4.70it/s]
import network module: networks.lora
create LoRA network. base dim (rank): 128, alpha: 64.0
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.
use 8-bit AdamW optimizer | {'weight_decay': 0.1, 'betas': (0.9, 0.99)}
override steps. steps for 20 epochs is / 指定エポックまでのステップ数: 20400
running training / 学習開始
  num train images * repeats / 学習画像の数×繰り返し回数: 2040
  num reg images / 正則化画像の数: 0
  num batches per epoch / 1epochのバッチ数: 1020
  num epochs / epoch数: 20
  batch size per device / バッチサイズ: 1
  gradient accumulation steps / 勾配を合計するステップ数 = 1
  total optimization steps / 学習ステップ数: 20400
steps: 0%| | 0/20400 [00:00<?, ?it/s]
epoch 1/20
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Train Tools: train.train error replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'
Traceback (most recent call last):
  File "D:\ai\stable-diffusion-webui\extensions\sd-webui-train-tools\liasece_sd_webui_train_tools\train_ui.py", line 121, in on_train_begin_click
    train.train(cfg)
  File "D:\ai\stable-diffusion-webui\extensions\sd-webui-train-tools\liasece_sd_webui_train_tools\train.py", line 68, in train
    train_network.train(args)
  File "D:\ai\stable-diffusion-webui\extensions\sd-webui-train-tools\liasece_sd_webui_train_tools\sd_scripts\train_network.py", line 538, in train
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\accelerate\utils\operations.py", line 495, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\amp\autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 582, in forward
    sample, res_samples = downsample_block(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 837, in forward
    hidden_states = attn(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\attention.py", line 291, in forward
    attn_output = self.attn1(
  File "D:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'

steps: 0%| | 0/20400 [00:58<?, ?it/s]
```
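
For reference, the versions in this log (diffusers 0.14.0, accelerate 0.18.0, transformers 4.26.1) were pulled in by the Dreambooth extension's requirements step. A minimal sketch for printing what is actually active inside the webui venv (run with the venv activated; nothing here is specific to either extension):

```python
# Print the installed versions of the packages the startup log mentions.
# importlib.metadata ships with Python 3.8+, so nothing extra is needed.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("torch", "xformers", "diffusers", "transformers", "accelerate", "bitsandbytes"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```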

liasece commented 1 year ago

Did you install the Dreambooth extension?

liasece commented 1 year ago

Disable the Dreambooth extension and try again: https://github.com/liasece/sd-webui-train-tools/issues/6
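
Disabling Dreambooth removes the newer diffusers it installs, so the bundled patch and the library agree on the keyword name again. Purely as an illustration (a hypothetical shim, not this extension's actual fix), a patched forward that tolerated both keyword names would look roughly like this:

```python
# Hypothetical compatibility shim, not the extension's actual code: accept both
# the old keyword (`context`) and the newer one (`encoder_hidden_states`).
def forward_xformers(self, hidden_states, context=None, mask=None, **kwargs):
    context = kwargs.pop("encoder_hidden_states", context)  # newer diffusers name
    mask = kwargs.pop("attention_mask", mask)                # newer diffusers name
    # ... xformers memory-efficient attention over hidden_states/context would follow ...
    return hidden_states
```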

kingzeus commented 1 year ago

It works fine now after disabling the Dreambooth extension.