dvruette / sd-webui-fabric


ValueError: not enough values to unpack when using --xformers #24

Closed: h3rmit-git closed this issue 1 year ago

h3rmit-git commented 1 year ago

I am getting the following error when running FABRIC with --xformers.

The error goes away with --opt-split-attention instead of --xformers.

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 28, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 977, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 143, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict([cond_in[a:b]], image_cond_in[a:b]))
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 105, in new_forward
    pos_latents, neg_latents = get_latents_from_params(p, params, w, h)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 46, in get_latents_from_params
    params.pos_latents = get_latents(params.pos_images, params.pos_latents)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 36, in get_latents
    return [encode_to_latent(p, img, w, h) for img in images]
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 36, in <listcomp>
    return [encode_to_latent(p, img, w, h) for img in images]
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 24, in encode_to_latent
    vae_output = p.sd_model.encode_first_stage(x)
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "E:\stable-diffusion-webui\modules\lowvram.py", line 48, in first_stage_model_encode_wrap
    return first_stage_model_encode(x)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\multidiffusion-upscaler-for-automatic1111\scripts\vae_optimize.py", line 379, in __call__
    return self.net.original_forward(x)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 536, in forward
    h = self.mid.attn_1(h)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 258, in forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 36, in patched_xformers_attn
    bs, nq, nh, dh = q.shape  # batch_size, num_queries, num_heads, dim_per_head

ValueError: not enough values to unpack (expected 4, got 3)
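
For what it's worth, xformers' memory_efficient_attention accepts both 3-D (batch, seq, dim) and 4-D (batch, seq, heads, dim_per_head) query tensors, and the VAE self-attention path apparently passes the 3-D form, so unpacking four values fails there. A shape-tolerant unpack (just a sketch for illustration, not a proposed patch) could look like this:

    import torch

    def unpack_query_shape(q: torch.Tensor):
        # Sketch: handle both query layouts accepted by xformers.
        if q.ndim == 4:
            bs, nq, nh, dh = q.shape  # batch, queries, heads, dim_per_head
        elif q.ndim == 3:
            bs, nq, dh = q.shape      # batch, queries, dim (heads folded into dim)
            nh = 1
        else:
            raise ValueError(f"unexpected query shape {tuple(q.shape)}")
        return bs, nq, nh, dh
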
dvruette commented 1 year ago

I've been struggling to get xformers to work in the past, but I believe this is an error I haven't seen before; I'll look into it. In general, I recommend --opt-split-attention when using FABRIC, especially with more than 2 feedback images, since even xformers struggles with VRAM in those cases. I'll also be looking into improving performance (speed, memory) using token merging or similar in the near future.

dvruette commented 1 year ago

I seem to be unable to reproduce the error above. It appears to occur in the VAE forward pass while MultiDiffusion is enabled, but it doesn't happen for me even with that extension enabled. Could you share some generation parameters to help reproduce the issue?

h3rmit-git commented 1 year ago

Below is a stacktrace without MultiDiffusion.

I suspect the error may be related to the version of one of the components; here is my setup:

Stability AI version: https://github.com/Stability-AI/stablediffusion/commit/cf1d67a6fd5ea1aa600c4df58e5b47da45f6bdbf (Mar 25, 2023)

COMMANDLINE_ARGS=--disable-safe-unpickle --medvram --xformers --no-half-vae

The option --opt-split-attention unfortunately makes me run out of VRAM even when not using FABRIC. I'm running on 4GB of VRAM.

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 28, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 977, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 143, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict([cond_in[a:b]], image_cond_in[a:b]))
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 105, in new_forward
    pos_latents, neg_latents = get_latents_from_params(p, params, w, h)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 46, in get_latents_from_params
    params.pos_latents = get_latents(params.pos_images, params.pos_latents)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 36, in get_latents
    return [encode_to_latent(p, img, w, h) for img in images]
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 36, in <listcomp>
    return [encode_to_latent(p, img, w, h) for img in images]
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 24, in encode_to_latent
    vae_output = p.sd_model.encode_first_stage(x)
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "E:\stable-diffusion-webui\modules\lowvram.py", line 48, in first_stage_model_encode_wrap
    return first_stage_model_encode(x)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 536, in forward
    h = self.mid.attn_1(h)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 258, in forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=self.attention_op)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 36, in patched_xformers_attn
    bs, nq, nh, dh = q.shape  # batch_size, num_queries, num_heads, dim_per_head

ValueError: not enough values to unpack (expected 4, got 3)
dvruette commented 1 year ago

I unfortunately still can't reproduce the issue on my end. I tried pushing a blind fix based on the error message, but it's pretty much a shot in the dark at this point. Could you maybe provide the specific steps that lead to the error? Does it happen immediately or only after a few generations?

h3rmit-git commented 1 year ago

The error happens immediately and always. The steps I am following are:

  1. Launch WebUI with --xformers.
  2. Keep everything at its default value and leave the prompt empty.
  3. Upload an image in FABRIC and Like it.
  4. Generate an image.

After your last two commits that attempted to fix the issue, I am getting some new errors:

First run stacktrace:

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 28, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 977, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 143, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict([cond_in[a:b]], image_cond_in[a:b]))
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 199, in new_forward
    out = self._fabric_old_forward(x, timesteps, context, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 178, in patched_attn1_forward
    out_cond = weighted_attention(attn1, attn1._fabric_old_forward, x_cond, ctx_cond, ws, **kwargs)  # (n_cond, seq, dim)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 163, in weighted_attention
    assert _xformers_attn in locals() or _xformers_attn in globals(), "xformers attention function not found"
AssertionError: xformers attention function not found
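
As an aside, if _xformers_attn holds a function object, that assertion looks like it can never pass: `x in locals()` tests the dictionary's keys, i.e. variable names, so a function object will never be found among them. A quick demonstration:

    def demo():
        fn = print
        assert "fn" in locals()    # membership checks keys (variable names)...
        assert fn not in locals()  # ...so the function object itself is never "in" locals()

    demo()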

Second run stacktrace:

Traceback (most recent call last):
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\txt2img.py", line 57, in txt2img
    processed = processing.process_images(p)
  File "E:\stable-diffusion-webui\modules\processing.py", line 611, in process_images
    res = process_images_inner(p)
  File "E:\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 729, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 28, in process_sample
    return process.sample_before_CN_hack(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\processing.py", line 977, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 257, in launch_sampling
    return func()
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 383, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 143, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict([cond_in[a:b]], image_cond_in[a:b]))
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "E:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "E:\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
    result = forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 156, in new_forward
    _ = self._fabric_old_forward(zs, ts, ctx)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
    x = layer(x, context)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
    x = block(x, context=context[i])
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "E:\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
    x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
  File "E:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 140, in patched_attn1_forward
    out = attn1._fabric_old_forward(x, **kwargs)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 178, in patched_attn1_forward
    out_cond = weighted_attention(attn1, attn1._fabric_old_forward, x_cond, ctx_cond, ws, **kwargs)  # (n_cond, seq, dim)
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 154, in weighted_attention
    if is_the_same(attn_fn, split_cross_attention_forward_invokeAI):
  File "E:\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 147, in is_the_same
    return fn1.__name__ == fn2.__name__ and fn1.__module__ == fn2.__module__
AttributeError: 'functools.partial' object has no attribute '__name__'
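
The second failure is easier to pin down: functools.partial objects don't carry a __name__ attribute, so a name-based comparison like is_the_same would need to unwrap partials first (or fall back via getattr). A defensive sketch, not the extension's actual code:

    import functools

    def _unwrap(fn):
        # Follow functools.partial wrappers down to the underlying function.
        while isinstance(fn, functools.partial):
            fn = fn.func
        return fn

    def is_the_same(fn1, fn2):
        f1, f2 = _unwrap(fn1), _unwrap(fn2)
        return (getattr(f1, "__name__", None) == getattr(f2, "__name__", None)
                and getattr(f1, "__module__", None) == getattr(f2, "__module__", None))
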
h3rmit-git commented 1 year ago

> I've been struggling to get xformers to work in the past, but I believe this is an error I haven't seen before; I'll look into it.

I also had a look at how the Tiled Diffusion plugin hijacks the attention methods, including xformers. Having used it for months across thousands of image generations without any problems, I consider that plugin to be very robust. Its hijacking approach is an interesting alternative that may provide some insights and help solve other issues as well; please have a look at the following files:

https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111/blob/main/tile_utils/attn.py

https://github.com/pkuliyi2015/multidiffusion-upscaler-for-automatic1111/blob/main/scripts/tilediffusion.py
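
The core of their approach, as far as I can tell (class and method names below are illustrative, not the plugin's actual API), is to keep a reference to the original forward, refuse to double-patch, and restore deterministically:

    class AttnHijack:
        # Illustrative sketch of a reversible attention hijack.
        def __init__(self, module):
            self.module = module
            self.original_forward = None

        def hijack(self, new_forward):
            if self.original_forward is None:  # never patch twice
                self.original_forward = self.module.forward
                self.module.forward = new_forward

        def restore(self):
            if self.original_forward is not None:
                self.module.forward = self.original_forward
                self.original_forward = None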

dvruette commented 1 year ago

Ok, I've pretty much overhauled the entire weighted attention code using an approach similar to the code you linked, which should hopefully make it more robust. Unfortunately, I still haven't had any luck reproducing your error, but I'm now more confident that it should be fixed.

Lembont commented 1 year ago

I had the same error, but updating the script seems to have fixed it! Thank you so much!

h3rmit-git commented 1 year ago

It works!

Unlike --opt-split-attention, which can only manage 256x256 images on 4GB of VRAM, --xformers now lets me generate 512x512 images with 4 Liked images!

However, there's still one more bug in the patching/unpatching process. If an error occurs during generation (e.g. out of memory), the next generation enters an infinite loop (100% CPU, 0% GPU, no progress).
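
My guess (purely speculative) is that when generation throws, the unpatch step is skipped, so the next run patches the already-patched forward and ends up calling itself. Wrapping the sampling call in try/finally would guarantee restoration, along these lines (patch_unet and unpatch_unet are hypothetical helpers):

    def sample_with_fabric(p, do_sample):
        patch_unet(p)        # hypothetical: install FABRIC's patched forward
        try:
            return do_sample(p)
        finally:
            unpatch_unet(p)  # hypothetical: runs even after OOM or an interrupt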

dvruette commented 1 year ago

Awesome! I think I know what could be causing the issue you're describing, although I thought it was already being handled correctly. It should be easier to reproduce as well; I'll have a look.

h3rmit-git commented 1 year ago

Thanks!

On the same topic of patching: there is also always a message about FABRIC restoring the original U-Net forward pass, even when the extension is disabled. This is probably related to the bug.


100%|██████████████████████████████████████████| 20/20 [00:56<00:00,  2.80s/it]
[FABRIC] Restoring original U-Net forward pass█| 20/20 [00:27<00:00,  1.46s/it]
Total progress: 100%|██████████████████████████| 20/20 [00:35<00:00,  1.77s/it]
100%|██████████████████████████████████████████| 20/20 [00:31<00:00,  1.60s/it]
[FABRIC] Restoring original U-Net forward pass█| 20/20 [00:27<00:00,  1.46s/it]
Total progress: 100%|██████████████████████████| 20/20 [00:33<00:00,  1.67s/it]
100%|██████████████████████████████████████████| 20/20 [00:32<00:00,  1.60s/it]
[FABRIC] Restoring original U-Net forward pass█| 20/20 [00:27<00:00,  1.47s/it]
Total progress: 100%|██████████████████████████| 20/20 [00:33<00:00,  1.66s/it]
100%|██████████████████████████████████████████| 20/20 [00:35<00:00,  1.77s/it]
[FABRIC] Restoring original U-Net forward pass█| 20/20 [00:31<00:00,  1.45s/it]
Total progress: 100%|██████████████████████████| 20/20 [00:36<00:00,  1.84s/it]
Total progress: 100%|██████████████████████████| 20/20 [00:36<00:00,  1.45s/it]
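
Presumably the restore step runs unconditionally at the end of every generation. Guarding it on whether anything was actually patched (the tracebacks show the original forward is stashed under _fabric_old_forward) should silence the spurious message; a rough sketch:

    def restore_unet_forward(unet):
        # Only restore (and log) if this model was actually patched.
        old_forward = getattr(unet, "_fabric_old_forward", None)
        if old_forward is None:
            return
        unet.forward = old_forward
        del unet._fabric_old_forward
        print("[FABRIC] Restoring original U-Net forward pass")
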
h3rmit-git commented 1 year ago

I managed to fix the infinite loop issue and opened a Pull Request: https://github.com/dvruette/sd-webui-fabric/pull/28

Edit: the same Pull Request also includes a fix for the U-Net restoration message described above.