d8ahazard / sd_dreambooth_extension


[Bug]: The deprecation tuple ('LoRAAttnProcessor2_0', '0.26.0', 'Make sure use AttnProcessor2_0 instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. This will be done automatically when using `LoraLoaderMixin.load_lora_weights`') should be removed since diffusers' version 0.26.1 is >= 0.26.0 #1456

Closed whomm closed 3 months ago

whomm commented 5 months ago

Is there an existing issue for this?

What happened?

Training fails as soon as it starts, with the exception below:

Output Exception training model: 'The deprecation tuple ('LoRAAttnProcessor', '0.26.0', 'Make sure use AttnProcessor instead by settingLoRA layers to self.{to_q,to_k,to_v,to_out[0]}.lora_layer respectively. This will be done automatically when using LoraLoaderMixin.load_lora_weights') should be removed since diffusers' version 0.26.1 is >= 0.26.0'.
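(For reference, the installed diffusers version can be checked from the webui's Python environment with a one-liner, e.g.:

python -c "import diffusers; print(diffusers.__version__)"

The error message reports 0.26.1, which is at or past the 0.26.0 cutoff named in the deprecation tuple, so diffusers' deprecate() raises a ValueError instead of only emitting a warning.)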

Steps to reproduce the problem

  1. Go to ....
  2. Press ....
  3. ...

Commit and libraries

sd_dreambooth_extension, downloaded as the master branch zip

Command Line Arguments

none

Console logs

Preparing Dataset (With Caching)| 0/1 [00:00<?, ?it/s]
Bucket 0 (512, 512, 0) - Instance Images: 110 | Class Images: 0 | Max Examples/batch: 110
Total Buckets 1 - Instance Images: 110 | Class Images: 0 | Max Examples/batch: 110

Total images / batch: 110, total examples: 1100/110 [00:00<00:00, 121000.12it/s]
                  Initializing bucket counter!
Steps:   0%|          | 0/11000 [00:00<?, ?it/s]Traceback (most recent call last):
  File "/data/workdata/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/ui_functions.py", line 735, in start_training
    result = main(class_gen_method=class_gen_method)
  File "/data/workdata/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1991, in main
    return inner_loop()
  File "/data/workdata/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/memory.py", line 126, in decorator
    return function(batch_size, grad_size, prof, *args, **kwargs)
  File "/data/workdata/stable-diffusion-webui/extensions/sd_dreambooth_extension/dreambooth/train_dreambooth.py", line 1732, in inner_loop
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/accelerate/utils/operations.py", line 581, in forward
    return model_forward(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/accelerate/utils/operations.py", line 569, in __call__
    return convert_to_fp32(self.model_forward(*args, **kwargs))
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/amp/autocast_mode.py", line 16, in decorate_autocast
    return func(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_condition.py", line 1121, in forward
    sample, res_samples = downsample_block(
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/models/unets/unet_2d_blocks.py", line 1189, in forward
    hidden_states = attn(
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/models/transformers/transformer_2d.py", line 379, in forward
    hidden_states = torch.utils.checkpoint.checkpoint(
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/_compile.py", line 24, in inner
    return torch._dynamo.disable(fn, recursive)(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 489, in _fn
    return fn(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/_dynamo/external_utils.py", line 17, in inner
    return fn(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/utils/checkpoint.py", line 489, in checkpoint
    ret = function(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/models/transformers/transformer_2d.py", line 374, in custom_forward
    return module(*inputs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/models/attention.py", line 329, in forward
    attn_output = self.attn1(
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 512, in forward
    return self.processor(
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 1856, in __call__
    deprecate(
  File "/home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/utils/deprecation_utils.py", line 18, in deprecate
    raise ValueError(
ValueError: The deprecation tuple ('LoRAAttnProcessor', '0.26.0', 'Make sure use AttnProcessor instead by settingLoRA layers to `self.{to_q,to_k,to_v,to_out[0]}.lora_layer` respectively. This will be done automatically when using `LoraLoaderMixin.load_lora_weights`') should be removed since diffusers' version 0.26.1 is >= 0.26.0
Steps:   0%|          | 0/11000 [00:00<?, ?it/s]
Duration: 00:00:22
Duration: 00:00:23

Additional information

No response

Disastorm commented 5 months ago

I get this error too. I recently recreated my venv from scratch on Windows with the --xformers flag as well.

daszzzpg commented 5 months ago

I get this error today as well. Would someone care to explain?

warchildmd commented 5 months ago

Go to /home/work/miniconda3/envs/py3.10.6/lib/python3.10/site-packages/diffusers/utils/deprecation_utils.py and comment out lines 17-21 (the version check that raises the ValueError). That should fix the problem until a fixed version is released:

# if version.parse(version.parse(__version__).base_version) >= version.parse(version_name) and False:
#     raise ValueError(
#         f"The deprecation tuple {(attribute, version_name, message)} should be removed since diffusers'"
#         f" version {__version__} is >= {version_name}"
#     )
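Alternatively (untested here), if you would rather not edit site-packages, pinning diffusers below 0.26 in the webui's Python environment should also avoid the hard failure, since on the 0.25.x line this deprecation only warns instead of raising:

pip install "diffusers<0.26"

Either way, this is a temporary workaround until the extension is updated for diffusers 0.26+.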
github-actions[bot] commented 4 months ago

This issue is stale because it has been open for 14 days with no activity. Remove the stale label or comment, or it will be closed in 30 days.