AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: NotImplementedError #15516

[Open] An0m3l1ss opened this issue 3 months ago

An0m3l1ss commented 3 months ago


What happened?

Something broke in webui-user. I tried to roll the web UI back to the old version 1.8.0 by rewriting the git command: where it said git pull origin master, I ran git reset --hard v1.8.0 instead. That didn't work, and now the program has stopped generating images entirely. The problem is definitely not my computer, but the error below appears every time. Has anyone figured out how to fix this?

(screenshot of the error attached)
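For reference, a rollback to a tagged release only works if the tags are present locally, and the "fatal: No names found, cannot describe anything." line in the console log below typically means they are not (that message comes from git describe finding no tags). A minimal sketch of the usual rollback sequence, assuming the install is an ordinary git clone of this repository and the commands are run from its folder:

    rem fetch branches and tags first; without this, git does not know the tag v1.8.0
    git fetch --all --tags
    rem point the working tree at the 1.8.0 release
    git checkout v1.8.0
    rem (git reset --hard v1.8.0 does the same thing destructively, discarding local changes)

Note that even a successful checkout does not touch the venv, so a torch/xformers mismatch like the one in the logs below survives the rollback.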

Steps to reproduce the problem

NotImplementedError: No operator found for memory_efficient_attention_forward (the same error, fully formatted, appears in the Console logs section below).
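The error text itself names the diagnostic: python -m xformers.info lists every attention operator, whether it was built, and which torch/CUDA the wheel was compiled against. One way to run it with the webui's own interpreter (path taken from the console log below):

    rem from the webui folder, using the bundled venv's python
    venv\Scripts\python.exe -m xformers.info

If the torch build it reports differs from the torch actually installed (here: built for 2.1.2+cu121, installed 2.0.1+cu118, per the warning in the logs), all CUDA operators show as unavailable and every generation fails with exactly this NotImplementedError.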

What should have happened?

The rollback should have restored the program to the correct version with image generation still working. Instead, it made things even worse.

What browsers do you use to access the UI?

No response

Sysinfo

No.

Console logs

venv "venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: bef51aed032c0aaa5cfd80445bc4cf0d85b408b5
Launching Web UI with arguments: --xformers --autolaunch --theme dark
WARNING:xformers:WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.1.2+cu121 with CUDA 1201 (you have 2.0.1+cu118)
    Python  3.10.11 (you have 3.10.9)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
==============================================================================
You are running torch 2.0.1+cu118.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
[-] ADetailer initialized. version: 24.3.1, num models: 10
ControlNet preprocessor location: D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-controlnet\annotator\downloads
2024-04-14 23:22:40,873 - ControlNet - INFO - ControlNet v1.1.441
2024-04-14 23:22:40,952 - ControlNet - INFO - ControlNet v1.1.441
Loading weights [bfb82d76c7] from D:\SDP\stable-diffusion-portable-main\models\Stable-diffusion\949bb26a4c989cbf387d10c62c6e0fac.safetensors
[LyCORIS]-WARNING: LyCORIS legacy extension is now loaded, if you don't expext to see this message, please disable this extension.
2024-04-14 23:22:41,249 - ControlNet - INFO - ControlNet UI callback registered.
*** Error executing callback ui_tabs_callback for D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-depth-lib\scripts\main.py
    Traceback (most recent call last):
      File "D:\SDP\stable-diffusion-portable-main\modules\script_callbacks.py", line 180, in ui_tabs_callback
        res += c.callback() or []
      File "D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-depth-lib\scripts\main.py", line 47, in on_ui_tabs        dataset = gr.Examples(examples=os.path.join(maps_path, t), inputs=[png_input_area],examples_per_page=24,label="Depth Maps", elem_id="examples")
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 58, in create_examples        examples_obj = Examples(
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 209, in __init__
        self.processed_examples = [
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 210, in <listcomp>
        [
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\helpers.py", line 211, in <listcomp>
        component.postprocess(sample)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio\components\image.py", line 318, in postprocess
        return client_utils.encode_url_or_file_to_base64(y)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio_client\utils.py", line 387, in encode_url_or_file_to_base64
        return encode_file_to_base64(path)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\gradio_client\utils.py", line 360, in encode_file_to_base64
        with open(f, "rb") as file:
    PermissionError: [Errno 13] Permission denied: 'tmp'

---
Creating model from config: D:\SDP\stable-diffusion-portable-main\configs\v1-inference.yaml
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 10.4s (prepare environment: 2.1s, import torch: 2.7s, import gradio: 0.8s, setup paths: 0.7s, initialize shared: 0.2s, other imports: 0.4s, load scripts: 2.6s, create ui: 0.4s, gradio launch: 0.4s).
Loading VAE weights specified in settings: D:\SDP\stable-diffusion-portable-main\models\VAE\vae-ft-ema-560000-ema-pruned.safetensors
Applying attention optimization: xformers... done.
Model loaded in 4.5s (load weights from disk: 0.4s, create model: 0.5s, apply weights to model: 1.6s, load VAE: 1.0s, calculate empty prompt: 0.8s).
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]
*** Error completing request
*** Arguments: ('task(bcn55hk1gljk7yp)', <gradio.routes.Request object at 0x000001F1B68AF6A0>, 'dog', '(deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, (mutated hands and fingers:1.4), disconnected limbs, mutation, mutated, ugly, disgusting, blurry, amputation', [], 20, 'Euler a', 1, 1, 8, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], 0, False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'DPM++ 2M Karras', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, '', 0, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), False, True, 3, 4, 0.15, 0.3, 'bicubic', 0.5, 2, True, False, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, 
False, 0, False, None, None, False, None, None, False, 50) {}
    Traceback (most recent call last):
      File "D:\SDP\stable-diffusion-portable-main\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "D:\SDP\stable-diffusion-portable-main\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\modules\txt2img.py", line 110, in txt2img
        processed = processing.process_images(p)
      File "D:\SDP\stable-diffusion-portable-main\modules\processing.py", line 785, in process_images
        res = process_images_inner(p)
      File "D:\SDP\stable-diffusion-portable-main\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\modules\processing.py", line 921, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "D:\SDP\stable-diffusion-portable-main\modules\processing.py", line 1257, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_kdiffusion.py", line 234, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_kdiffusion.py", line 234, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
        x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_hijack_utils.py", line 18, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_hijack_utils.py", line 32, in __call__
        return self.__orig_func(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_unet.py", line 91, in UNetModel_forward
        return original_forward(self, x, timesteps, context, *args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
        x = layer(x, context)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
        x = block(x, context=context[i])
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
        return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
        return CheckpointFunction.apply(func, len(inputs), *args)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply        return super().apply(*args, **kwargs)  # type: ignore[misc]
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
        output_tensors = ctx.run_function(*ctx.input_tensors)
      File "D:\SDP\stable-diffusion-portable-main\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
        x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "D:\SDP\stable-diffusion-portable-main\modules\sd_hijack_optimizations.py", line 496, in xformers_attention_forward
        out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None, op=get_xformers_flash_attention_op(q, k, v))
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 223, in memory_efficient_attention
        return _memory_efficient_attention(
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 321, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 337, in _memory_efficient_attention_forward
        op = _dispatch_fw(inp, False)
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 120, in _dispatch_fw
        return _run_priority_list(
      File "D:\SDP\stable-diffusion-portable-main\venv\lib\site-packages\xformers\ops\fmha\dispatch.py", line 63, in _run_priority_list
        raise NotImplementedError(msg)
    NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
         query       : shape=(2, 4096, 8, 40) (torch.float16)
         key         : shape=(2, 4096, 8, 40) (torch.float16)
         value       : shape=(2, 4096, 8, 40) (torch.float16)
         attn_bias   : <class 'NoneType'>
         p           : 0.0
    `decoderF` is not supported because:
        xFormers wasn't build with CUDA support
        attn_bias type is <class 'NoneType'>
        operator wasn't built - see `python -m xformers.info` for more info
    `flshattF@0.0.0` is not supported because:
        xFormers wasn't build with CUDA support
        operator wasn't built - see `python -m xformers.info` for more info
    `tritonflashattF` is not supported because:
        xFormers wasn't build with CUDA support
        operator wasn't built - see `python -m xformers.info` for more info
        triton is not available
    `cutlassF` is not supported because:
        xFormers wasn't build with CUDA support
        operator wasn't built - see `python -m xformers.info` for more info
    `smallkF` is not supported because:
        max(query.shape[-1] != value.shape[-1]) > 32
        xFormers wasn't build with CUDA support
        dtype=torch.float16 (supported: {torch.float32})
        operator wasn't built - see `python -m xformers.info` for more info
        unsupported embed per head: 40

---

Additional information

Everything was working fine just yesterday; today the version updated. I don't know how to roll back properly, and my attempt only made things worse.
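The warning block in the console log states the root cause directly: the installed xformers wheel was built for PyTorch 2.1.2+cu121, but the venv contains torch 2.0.1+cu118, so none of the xformers CUDA kernels can load. Two possible ways to realign the pair; the --reinstall-torch flag is the one the log itself suggests, while the pinned xformers version below is an assumption that should be checked against the xformers release notes:

    rem option 1: inside webui-user.bat, add the flag once, launch, then remove it
    set COMMANDLINE_ARGS=--xformers --autolaunch --theme dark --reinstall-torch

    rem option 2: keep torch 2.0.1+cu118 and install an xformers build that matches it
    rem (0.0.20 is the wheel commonly paired with torch 2.0.1; verify before installing)
    venv\Scripts\python.exe -m pip install xformers==0.0.20

Either way, a plain --xformers launch afterwards should reach "Applying attention optimization: xformers... done." without the C++/CUDA extension warning.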

AG-w commented 3 months ago

What if you download 1.8.0 from GitHub and run it in an empty folder so it reinstalls everything?
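For completeness, that suggestion amounts to a fresh checkout of the release tag so the launcher rebuilds a consistent venv from scratch; one sketch of it that reuses the already-downloaded models (the destination folder name and the copy step are illustrative, not required):

    git clone --branch v1.8.0 https://github.com/AUTOMATIC1111/stable-diffusion-webui.git sd-webui-1.8.0
    cd sd-webui-1.8.0
    rem optional: copy existing checkpoints/VAEs over instead of re-downloading them
    xcopy /E /I D:\SDP\stable-diffusion-portable-main\models models
    webui-user.bat

On first launch the script creates a new venv and installs the torch/xformers pair that 1.8.0 was tested with, sidestepping the mismatch in the broken install.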