dvruette / sd-webui-fabric

MIT License

Big Error. FABRIC does not work. #29

Closed · Gushousekai195 closed 1 year ago

Gushousekai195 commented 1 year ago
    Traceback (most recent call last):
      File "C:\Users\mattb\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\mattb\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\mattb\stable-diffusion-webui\modules\processing.py", line 722, in process_images
        res = process_images_inner(p)
      File "C:\Users\mattb\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\modules\processing.py", line 857, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\mattb\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\marking.py", line 29, in process_sample
        return process.sample_before_CN_hack(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\modules\processing.py", line 1130, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\mattb\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 231, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\mattb\stable-diffusion-webui\modules\sd_samplers_common.py", line 250, in launch_sampling
        return func()
      File "C:\Users\mattb\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 231, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 626, in sample_dpmpp_2m_sde
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 188, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "C:\Users\mattb\stable-diffusion-webui\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 229, in new_forward
        out = self._fabric_old_forward(x, timesteps, context, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 799, in forward
        h = self.middle_block(h, emb, context)
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "C:\Users\mattb\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 213, in patched_attn1_forward
        out_cond = attention_with_feedback(x[cond_ids], context[cond_ids], cached_hs[:num_pos], pos_weight)  # (n_cond, seq, dim)
      File "C:\Users\mattb\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\patching.py", line 209, in attention_with_feedback
        return weighted_attention(attn1, attn1._fabric_old_forward, _x, ctx, weights, **kwargs)  # (n_cond, seq, dim)
      File "C:\Users\mattb\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 62, in weighted_attention
        return weighted_attn_fn(self, x, context=context, weights=weights, **kwargs)
      File "C:\Users\mattb\stable-diffusion-webui\extensions\sd-webui-fabric\scripts\weighted_attention.py", line 197, in weighted_xformers_attention_forward
        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=attn_bias, op=get_xformers_flash_attention_op(q, k, v))
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
        return _memory_efficient_attention(
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 306, in _memory_efficient_attention_forward
        op = _dispatch_fw(inp)
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 94, in _dispatch_fw
        return _run_priority_list(
      File "C:\Users\mattb\stable-diffusion-webui\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 69, in _run_priority_list
        raise NotImplementedError(msg)
    NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
         query       : shape=(1, 165, 8, 160) (torch.float32)
         key         : shape=(1, 330, 8, 160) (torch.float32)
         value       : shape=(1, 330, 8, 160) (torch.float32)
         attn_bias   : <class 'torch.Tensor'>
         p           : 0.0
    `cutlassF` is not supported because:
        attn_bias.stride(-2) % 4 != 0 (attn_bias.stride() = (330, 0, 0, 1))
        HINT: To use an `attn_bias` with a sequence length that is not a multiple of 8, you need to ensure memory is aligned by slicing a bigger tensor. Example: use `attn_bias = torch.zeros([1, 1, 5, 8])[:,:,:,:5]` instead of `torch.zeros([1, 1, 5, 5])`
    `flshattF` is not supported because:
        dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
        max(query.shape[-1] != value.shape[-1]) > 128
        attn_bias type is <class 'torch.Tensor'>
    `tritonflashattF` is not supported because:
        dtype=torch.float32 (supported: {torch.bfloat16, torch.float16})
        max(query.shape[-1] != value.shape[-1]) > 128
        attn_bias type is <class 'torch.Tensor'>
        Operator wasn't built - see `python -m xformers.info` for more info
        triton is not available
        requires A100 GPU
    `smallkF` is not supported because:
        max(query.shape[-1] != value.shape[-1]) > 32
        unsupported embed per head: 160
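
For context: `memory_efficient_attention` walks a priority list of backends and lists why each one is ruled out. The decisive rejection is `cutlassF`, the only kernel that could accept a float32 tensor bias here: it requires the bias row stride (`stride(-2)`) to be a multiple of 4, and with 330 key tokens (165 × 2, consistent with one feedback image being appended) a contiguous bias has a row stride of 330. A minimal illustration of just that check, with shapes taken from the error above:

```python
import torch

# Shapes from the error output: 165 query tokens, 330 key tokens.
seq_q, seq_k = 165, 330
attn_bias = torch.zeros(1, 1, seq_q, seq_k)

# cutlassF needs attn_bias.stride(-2) % 4 == 0. A contiguous bias with
# 330 columns has a row stride of 330, and 330 % 4 == 2, so the kernel
# is rejected; the fp16-only kernels never apply to a float32 model.
print(attn_bias.stride())        # (54450, 54450, 330, 1)
print(attn_bias.stride(-2) % 4)  # 2 -> rejected
```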
dvruette commented 1 year ago

Hi Gusho, thanks for your issue. Please always share the necessary information and the steps to reproduce the error you're seeing; otherwise I have virtually no chance of fixing it.

In this case, I'll assume you're running the latest version; if not, make sure to update FABRIC. From the traceback it also looks like you're running with --xformers (which is now the recommended option) but in full precision. First, I recommend switching to half precision. Second, how many feedback images are you using, and are any ToMe settings or other extensions active?
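
If the full-precision run comes from launch flags, removing them puts the model back in fp16 (the webui default), which at least clears the dtype objections from the fp16-only kernels above. A sketch of `webui-user.bat`, assuming the flags are set there; the exact flags on your install may differ:

```bat
rem Before: fp32 (e.g. from --no-half / --precision full) rules out the
rem fp16-only attention kernels.
rem set COMMANDLINE_ARGS=--xformers --no-half --precision full

rem After: keep xformers, run the model in half precision.
set COMMANDLINE_ARGS=--xformers
```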

dvruette commented 1 year ago

I think this was caused by non-standard resolutions (e.g. generating 640x800 px images) and should be fixed in v0.6.2.
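
That matches the error above: at such resolutions the attention sequence lengths (165 queries, 330 keys here) are not multiples of 8, so a contiguous bias fails cutlassF's stride check, exactly the case the xformers hint describes. A fix along those lines would allocate the bias padded and slice it back down. The helper below is a hypothetical sketch, not FABRIC's actual v0.6.2 change; it also assumes the per-token feedback weights enter attention as an additive log-weight bias, the usual trick for reweighting softmax attention:

```python
import torch

def aligned_attn_bias(weights: torch.Tensor, seq_q: int) -> torch.Tensor:
    """Hypothetical helper: build a cutlassF-friendly attention bias.

    `weights` holds one positive weight per key token. The key dimension
    is allocated rounded up to a multiple of 8 and sliced back, so the
    returned view keeps the aligned row stride of the padded buffer.
    """
    seq_k = weights.shape[-1]
    pad_k = -(-seq_k // 8) * 8  # round up to a multiple of 8
    bias = torch.zeros(1, 1, seq_q, pad_k, dtype=weights.dtype, device=weights.device)
    bias[..., :seq_k] = weights.log()  # log-weights act as an additive bias
    return bias[..., :seq_k]

# With the shapes from the traceback: 330 keys pad to 336 columns, so
# stride(-2) is 336 and the alignment check passes.
bias = aligned_attn_bias(torch.ones(330), seq_q=165)
print(bias.shape, bias.stride())  # [1, 1, 165, 330], (55440, 55440, 336, 1)
```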