liasece / sd-webui-train-tools

A Stable Diffusion WebUI training-aid extension that helps you quickly and visually train models such as LoRA.

forward_xformers() got an unexpected keyword argument 'encoder_hidden_states' #6

newmou closed this issue 1 year ago

newmou commented 1 year ago

Can I get some help? Why hasn't training started after 30 minutes? The progress bar hasn't moved at all.

import network module: networks.lora
create LoRA network. base dim (rank): 128, alpha: 64.0
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.
use Adam | {'weight_decay': 0.1, 'betas': (0.9, 0.99)}
override steps. steps for 20 epochs is / 指定エポックまでのステップ数: 9600
running training / 学習開始
  num train images * repeats / 学習画像の数×繰り返し回数: 960
  num reg images / 正則化画像の数: 0
  num batches per epoch / 1epochのバッチ数: 480
  num epochs / epoch数: 20
  batch size per device / バッチサイズ: 2
  gradient accumulation steps / 勾配を合計するステップ数 = 1
  total optimization steps / 学習ステップ数: 9600
steps:   0%|                                                                                          | 0/9600 [00:00<?, ?it/s]epoch 1/20
Train Tools: train.train error replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'
steps:   0%|                                                                                          | 0/9600 [00:01<?, ?it/s]

Maybe the error is critical, but I can't solve it:

Train Tools: train.train error replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'
liasece commented 1 year ago

What is the output of "Launching Web UI with arguments:" on the command line when you first start webui?

newmou commented 1 year ago

Here is the output from when I started the webui:

Python 3.10.6 | packaged by conda-forge | (main, Aug 22 2022, 20:36:39) [GCC 10.4.0]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI

Installing sd-webui-train-tools requirement: /home1/newm/stable_linux/stable-diffusion-webui/extensions/sd-webui-train-tools/requirements.txt

If submitting an issue on github, please provide the full startup log for debugging purposes.

Initializing Dreambooth
Dreambooth revision: 3324b6ab7fa661cf7d6b5ef186227dc5e8ad1878
Successfully installed accelerate-0.17.1 diffusers-0.14.0 fastapi-0.90.1 gitpython-3.1.31 requests-2.28.2 starlette-0.23.1 transformers-4.26.1

Does your project take forever to startup?
Repetitive dependency installation may be the reason.
Automatic1111's base project sets strict requirements on outdated dependencies.
If an extension is using a newer version, the dependency is uninstalled and reinstalled twice every startup.

[+] xformers version 0.0.18+8e1673b.d20230329 installed.
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.
[+] accelerate version 0.17.1 installed.
[+] diffusers version 0.14.0 installed.
[+] transformers version 4.26.1 installed.
[+] bitsandbytes version 0.35.4 installed.

Launching Web UI with arguments: --deepdanbooru --theme dark --xformers --no-half-vae
2023-03-31 22:04:15.173853: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-03-31 22:04:16.146201: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying xformers cross attention optimization.
Textual inversion embeddings loaded(0): 
Model loaded in 6.3s (load weights from disk: 0.3s, create model: 0.6s, apply weights to model: 0.5s, apply half(): 0.4s, load VAE: 3.0s, move model to device: 1.4s).
liasece commented 1 year ago

It seems to be because your version of xformers is too new. Can you try pip install xformers==0.0.17?
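Before reinstalling anything, it can help to confirm which versions are actually loaded in the environment that launches webui (extensions sometimes reinstall packages, so the startup banner may not match reality). A small sketch using the standard library, run from the same venv/conda env as webui:

```python
# Print the versions of the packages relevant to this error, as seen
# by the Python environment this script runs in. Uses only the stdlib.
from importlib.metadata import version, PackageNotFoundError

for pkg in ("xformers", "diffusers", "torch", "transformers"):
    try:
        print(f"{pkg}: {version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```

If the printed diffusers/xformers versions differ from the ones in the startup log, another extension has likely replaced them.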

I pushed a train-tools update that outputs more detailed error messages. Can you please update this plugin and post the new error message?

newmou commented 1 year ago

OK, please wait a moment.

newmou commented 1 year ago

Here is the new output:

create LoRA network. base dim (rank): 128, alpha: 64.0
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 192 modules.
enable LoRA for text encoder
enable LoRA for U-Net
prepare optimizer, data loader etc.
use Lion optimizer | {'weight_decay': 0.1, 'betas': (0.9, 0.99)}
override steps. steps for 1 epochs is / 指定エポックまでのステップ数: 1020
running training / 学習開始
  num train images * repeats / 学習画像の数×繰り返し回数: 1020
  num reg images / 正則化画像の数: 0
  num batches per epoch / 1epochのバッチ数: 1020
  num epochs / epoch数: 1
  batch size per device / バッチサイズ: 1
  gradient accumulation steps / 勾配を合計するステップ数 = 1
  total optimization steps / 学習ステップ数: 1020
steps:   0%|                                                                                          | 0/1020 [00:00<?, ?it/s]epoch 1/1
Train Tools: train.train error replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'
Traceback (most recent call last):
  File "/home1/liu/stable_linux/stable-diffusion-webui/extensions/sd-webui-train-tools/liasece_sd_webui_train_tools/train_ui.py", line 121, in on_train_begin_click
    train.train(cfg)
  File "/home1/liu/stable_linux/stable-diffusion-webui/extensions/sd-webui-train-tools/liasece_sd_webui_train_tools/train.py", line 68, in train
    train_network.train(args)
  File "/home1/liu/stable_linux/stable-diffusion-webui/extensions/sd-webui-train-tools/liasece_sd_webui_train_tools/sd_scripts/train_network.py", line 538, in train
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/accelerate/utils/operations.py", line 495, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 14, in decorate_autocast
    return func(*args, **kwargs)
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/diffusers/models/unet_2d_condition.py", line 582, in forward
    sample, res_samples = downsample_block(
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/diffusers/models/unet_2d_blocks.py", line 837, in forward
    hidden_states = attn(
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/diffusers/models/transformer_2d.py", line 265, in forward
    hidden_states = block(
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/diffusers/models/attention.py", line 291, in forward
    attn_output = self.attn1(
  File "/home1/liu/anaconda3/envs/webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: replace_unet_cross_attn_to_xformers.<locals>.forward_xformers() got an unexpected keyword argument 'encoder_hidden_states'

steps:   0%|                                                                                          | 0/1020 [00:01<?, ?it/s]
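The traceback shows the root cause: diffusers 0.14's attention blocks call the (monkeypatched) attention forward with the keyword argument encoder_hidden_states, while the old xformers patch installs a forward_xformers that still expects the older context keyword. A minimal sketch of the mismatch; the functions below are simplified stand-ins, not the extension's actual code:

```python
# Stand-in for the function that replace_unet_cross_attn_to_xformers
# patches onto the attention module (pre-diffusers-0.13 signature,
# where the cross-attention input was named "context").
def forward_xformers(hidden_states, context=None, mask=None):
    return hidden_states if context is None else context

# Newer diffusers attention blocks call the forward with the renamed
# keyword, which the old patch does not accept:
try:
    forward_xformers("x", encoder_hidden_states="cond")
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'encoder_hidden_states'

# A hypothetical patch compatible with both APIs would accept either name:
def forward_xformers_compat(hidden_states, encoder_hidden_states=None,
                            attention_mask=None, context=None, mask=None):
    if encoder_hidden_states is not None:
        context = encoder_hidden_states
    return hidden_states if context is None else context

forward_xformers_compat("x", encoder_hidden_states="cond")  # no error
```

This is why pinning versions (or disabling the extension that installed the newer diffusers) makes the error disappear: the patch and the caller agree on the keyword again.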
liasece commented 1 year ago

Initializing Dreambooth
Dreambooth revision: 3324b6ab7fa661cf7d6b5ef186227dc5e8ad1878
Successfully installed accelerate-0.17.1 diffusers-0.14.0 fastapi-0.90.1 gitpython-3.1.31 requests-2.28.2 starlette-0.23.1 transformers-4.26.1

Does your project take forever to startup?
Repetitive dependency installation may be the reason.
Automatic1111's base project sets strict requirements on outdated dependencies.
If an extension is using a newer version, the dependency is uninstalled and reinstalled twice every startup.

[+] xformers version 0.0.18+8e1673b.d20230329 installed.
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.
[+] accelerate version 0.17.1 installed.
[+] diffusers version 0.14.0 installed.
[+] transformers version 4.26.1 installed.
[+] bitsandbytes version 0.35.4 installed.

Launching Web UI with arguments: --deepdanbooru --theme dark --xformers --no-half-vae

I think it is possible that enabling Dreambooth is breaking some dependency versions. Does anything change if you disable the Dreambooth extension?

liasece commented 1 year ago

That seems to confirm it: when I installed the Dreambooth plugin, I got the same error as you.

Simply put: to prevent training extensions from competing over different versions of diffusers, you need to disable other training extensions before using this one.

newmou commented 1 year ago

Thanks for your help. I disabled Dreambooth and now it works normally.

wukan1986 commented 1 year ago

pip install diffusers[torch]==0.10.2

diffusers==0.14.0 will throw this error.
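Since the thread establishes that diffusers 0.10.2 works with the old patch while 0.14.0 breaks it, a version guard can turn the opaque TypeError into a clear failure up front. A hedged sketch; the 0.13 cutoff is an assumption inferred from this thread (it is roughly where diffusers renamed the attention keyword), and uses_old_attention_kwargs is a hypothetical helper, not part of the extension:

```python
def uses_old_attention_kwargs(diffusers_version: str) -> bool:
    """True if this diffusers version still passes context= to attention.

    Assumption from this thread: 0.10.2 works with the old patch,
    0.14.0 does not; the keyword rename happened around 0.13.
    """
    major, minor = (int(p) for p in diffusers_version.split(".")[:2])
    return (major, minor) < (0, 13)

print(uses_old_attention_kwargs("0.10.2"))  # True  -> old patch is safe
print(uses_old_attention_kwargs("0.14.0"))  # False -> old patch will break
```

An extension could call such a check before applying the monkeypatch and raise a descriptive error ("diffusers X.Y is too new for this patch, pin diffusers[torch]==0.10.2") instead of failing mid-training.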