AUTOMATIC1111 / stable-diffusion-webui

Stable Diffusion web UI
GNU Affero General Public License v3.0

[Bug]: RuntimeError: FIND was unable to find an engine to execute this computation #9552

Open AoeSyL opened 1 year ago

AoeSyL commented 1 year ago

Is there an existing issue for this?

What happened?

RuntimeError: FIND was unable to find an engine to execute this computation
Time taken: 0.05s, Torch active/reserved: 2093/2106 MiB, Sys VRAM: 2960/15110 MiB (19.59%)

Some environment info:

Code: stable-diffusion-webui, branch: master, GPU: NVIDIA T4

python -V

Python 3.10.10

nvidia-smi

Tue Apr 11 15:48:32 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.91.03    Driver Version: 460.91.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            On   | 00000000:00:07.0 Off |                    0 |
| N/A   43C    P0    26W /  70W |   2962MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      5664      C   python                           2959MiB |
+-----------------------------------------------------------------------------+

pip list

absl-py 1.4.0 accelerate 0.12.0 addict 2.4.0 aenum 3.1.12 aiofiles 23.1.0 aiohttp 3.8.4 aiosignal 1.3.1 altair 4.2.2 antlr4-python3-runtime 4.9.3 anyio 3.6.2 async-timeout 4.0.2 attrs 22.2.0 basicsr 1.4.2 beautifulsoup4 4.12.2 blendmodes 2022 boltons 23.0.0 cachetools 5.3.0 certifi 2022.12.7 chardet 4.0.0 charset-normalizer 3.1.0 clean-fid 0.1.29 click 8.1.3 clip 1.0 cmake 3.26.3 coloredlogs 15.0.1 contourpy 1.0.7 cssselect2 0.7.0 cycler 0.11.0 Cython 0.29.34 deprecation 2.1.0 einops 0.4.1 entrypoints 0.4 facexlib 0.2.5 fastapi 0.94.0 ffmpy 0.3.0 filelock 3.11.0 filterpy 1.4.5 flatbuffers 23.3.3 font-roboto 0.0.1 fonts 0.0.3 fonttools 4.39.3 frozenlist 1.3.3 fsspec 2023.4.0 ftfy 6.1.1 future 0.18.3 gdown 4.7.1 gfpgan 1.3.8 gitdb 4.0.10 GitPython 3.1.30 google-auth 2.17.2 google-auth-oauthlib 1.0.0 gradio 3.23.0 grpcio 1.53.0 h11 0.12.0 httpcore 0.15.0 httpx 0.23.3 huggingface-hub 0.13.4 humanfriendly 10.0 idna 2.10 imageio 2.27.0 inflection 0.5.1 invisible-watermark 0.1.5 Jinja2 3.1.2 jsonmerge 1.8.0 jsonschema 4.17.3 kiwisolver 1.4.4 kornia 0.6.7 lark 1.1.2 lazy_loader 0.2 lightning-utilities 0.8.0 linkify-it-py 2.0.0 lit 16.0.1 llvmlite 0.39.1 lmdb 1.4.1 lpips 0.1.4 lxml 4.9.2 Markdown 3.4.3 markdown-it-py 2.2.0 MarkupSafe 2.1.2 matplotlib 3.7.1 mdit-py-plugins 0.3.3 mdurl 0.1.2 mpmath 1.3.0 multidict 6.0.4 networkx 3.1 numba 0.56.4 numpy 1.23.3 nvidia-cublas-cu11 11.10.3.66 nvidia-cuda-cupti-cu11 11.7.101 nvidia-cuda-nvrtc-cu11 11.7.99 nvidia-cuda-runtime-cu11 11.7.99 nvidia-cudnn-cu11 8.5.0.96 nvidia-cufft-cu11 10.9.0.58 nvidia-curand-cu11 10.2.10.91 nvidia-cusolver-cu11 11.4.0.1 nvidia-cusparse-cu11 11.7.4.91 nvidia-nccl-cu11 2.14.3 nvidia-nvtx-cu11 11.7.91 oauthlib 3.2.2 omegaconf 2.2.3 onnx 1.13.1 onnxruntime 1.14.1 open-clip-torch 2.7.0 opencv-contrib-python 4.7.0.72 opencv-python 4.7.0.72 orjson 3.8.10 packaging 23.0 pandas 2.0.0 pep517 0.13.0 piexif 1.1.3 Pillow 9.4.0 pip 23.0.1 protobuf 3.20.0 psutil 5.9.4 pyasn1 0.4.8 pyasn1-modules 0.2.8 pydantic 1.10.7 pyDeprecate 0.3.2 pydub 0.25.1 Pygments 2.15.0 pyparsing 3.0.9 pyrsistent 0.19.3 PySocks 1.7.1 python-dateutil 2.8.2 python-multipart 0.0.6 pytorch-lightning 1.9.4 pytz 2023.3 PyWavelets 1.4.1 PyYAML 6.0 realesrgan 0.3.0 regex 2023.3.23 reportlab 3.6.12 requests 2.25.1 requests-oauthlib 1.3.1 resize-right 0.0.2 rfc3986 1.5.0 rich 13.3.3 rsa 4.9 safetensors 0.3.0 scikit-image 0.19.2 scipy 1.10.1 semantic-version 2.10.0 sentencepiece 0.1.97 setuptools 65.6.3 six 1.16.0 smmap 5.0.0 sniffio 1.3.0 soupsieve 2.4 starlette 0.26.1 svglib 1.5.1 sympy 1.11.1 tb-nightly 2.13.0a20230410 tensorboard 2.12.1 tensorboard-data-server 0.7.0 tensorboard-plugin-wit 1.8.1 tifffile 2023.3.21 timm 0.6.7 tinycss2 1.2.1 tokenizers 0.13.3 tomli 2.0.1 toolz 0.12.0 torch 2.0.0 torchaudio 2.0.1 torchdiffeq 0.2.3 torchmetrics 0.11.4 torchsde 0.2.5 torchvision 0.15.1 tqdm 4.65.0 trampoline 0.1.2 transformers 4.25.1 triton 2.0.0 typing_extensions 4.5.0 tzdata 2023.3 uc-micro-py 1.0.1 urllib3 1.26.15 uvicorn 0.21.1 wcwidth 0.2.6 webencodings 0.5.1 websockets 11.0.1 Werkzeug 2.2.3 wheel 0.38.4 yapf 0.32.0 yarl 1.8.2

Steps to reproduce the problem

  1. Start launch.py
  2. Click Generate
  3. The error appears

What should have happened?

An image should have been generated and displayed.

Commit where the problem happens

22bcc7be428c94e9408f589966c2040187245d81

What platforms do you use to access the UI?

Linux

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--deepdanbooru --port 6006 --listen --enable-insecure-extension-access

List of extensions

sd-3dmodel-loader
sd-webui-3d-open-pose-editor
sd-webui-controlnet
stable-diffusion-webui-chinese
LDSR
Lora
ScuNET
SwinIR
prompt-bracket-checker

Console logs

(sd-webui) [root@iZt4n56uz5lti8kjefl8ivZ stable-diffusion-webui]# python launch.py --deepdanbooru --port 6007 --listen --enable-insecure-extension-access
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI

Launching Web UI with arguments: --deepdanbooru --port 6007 --listen --enable-insecure-extension-access
No module 'xformers'. Proceeding without it.
/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torchvision/transforms/functional_tensor.py:5: UserWarning: The torchvision.transforms.functional_tensor module is deprecated in 0.15 and will be **removed in 0.17**. Please don't rely on it. You probably just need to use APIs in torchvision.transforms.functional or in torchvision.transforms.v2.functional.
  warnings.warn(
Loading weights [6ce0161689] from /home/code/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /home/code/stable-diffusion-webui/configs/v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(0): 
Model loaded in 4.0s (load weights from disk: 0.3s, create model: 0.9s, apply weights to model: 1.8s, apply half(): 0.5s, move model to device: 0.5s).
Running on local URL:  http://0.0.0.0:6007

To create a public link, set `share=True` in `launch()`.
Startup time: 10.9s (import torch: 1.4s, import gradio: 1.0s, import ldm: 0.6s, other imports: 2.1s, load scripts: 1.1s, load SD checkpoint: 4.1s, create ui: 0.4s, gradio launch: 0.1s).
  0%|                                                                                               | 0/20 [00:08<?, ?it/s]
Error completing request
Arguments: ('task(be725i62pld1wxe)', '', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, [], 0, <scripts.external_code.ControlNetUnit object at 0x7f630f462c80>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', 0, '', 0, '', True, False, False, False, 0, None, False, 50) {}
Traceback (most recent call last):
  File "/home/code/stable-diffusion-webui/modules/call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "/home/code/stable-diffusion-webui/modules/call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/modules/txt2img.py", line 56, in txt2img
    processed = process_images(p)
  File "/home/code/stable-diffusion-webui/modules/processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "/home/code/stable-diffusion-webui/modules/processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "/home/code/stable-diffusion-webui/modules/processing.py", line 869, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "/home/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/home/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "/home/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 358, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 126, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/home/code/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/modules/sd_hijack_utils.py", line 17, in <lambda>
    setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
  File "/home/code/stable-diffusion-webui/modules/sd_hijack_utils.py", line 28, in __call__
    return self.__orig_func(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 1335, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 797, in forward
    h = module(h, emb, context)
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/openaimodel.py", line 86, in forward
    x = layer(x)
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/code/stable-diffusion-webui/extensions-builtin/Lora/lora.py", line 319, in lora_Conv2d_forward
    return torch.nn.Conv2d_forward_before_lora(self, input)
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/root/miniconda/envs/sd-webui/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: FIND was unable to find an engine to execute this computation

Additional information

No response
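
For anyone debugging this outside the webui: the bottom of the traceback is a plain half-precision `F.conv2d` on CUDA, so the failure can be reproduced in isolation. Below is a minimal sketch (not from the original report); run it in the same Python environment that produced the error.

```python
# Not part of the original report: a minimal sketch that exercises the same
# half-precision conv2d path that fails at the bottom of the traceback.
import torch
import torch.nn.functional as F

print(torch.__version__, torch.version.cuda, torch.backends.cudnn.version())

# Same dtype/device combination the webui uses after apply half().
x = torch.randn(1, 4, 64, 64, device="cuda", dtype=torch.float16)
w = torch.randn(320, 4, 3, 3, device="cuda", dtype=torch.float16)

try:
    y = F.conv2d(x, w, padding=1)
    torch.cuda.synchronize()
    print("conv2d OK:", tuple(y.shape))
except RuntimeError as e:
    # On a mismatched torch / driver / cuDNN stack this raises the same
    # "FIND was unable to find an engine to execute this computation" error.
    print("conv2d failed:", e)
```

If this small script fails with the same message, the problem lies in the torch / driver / cuDNN stack rather than in the webui itself.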

zxcvbnm0abc commented 1 year ago

I got the same issue. I tried upgrading PyTorch and CUDA; everything installed, but it still crashes.

SKBL5694 commented 1 year ago

I got the same bug. I changed PyTorch from 2.0.0 to torch 1.13.1+cu117 (I guess any 1.x.x works), and now it works well.

liuxianyi commented 1 year ago

I got the same issue. What should I do?

AmadouTidjani commented 1 year ago

Hello, I got the same issue. How can I solve it, please?

htaoruan commented 1 year ago

Hello, I got the same issue. How can I solve it, please?

AmadouTidjani commented 1 year ago

Hi htaoruan,

I only changed my torch version, from 2.0.0 to 1.13.1, which seemed more stable.

htaoruan commented 1 year ago

Thank you for your answer. I tried both torch 2.0.0 and 1.13.1, but I still encountered an error.

AmadouTidjani commented 1 year ago

Is it the same error message?

zhedahe commented 1 year ago

Same error. Does anyone know how to solve it?

zhedahe commented 1 year ago

Hi all, I solved this bug with the following steps: first uninstall your installed torch and torchvision, then run `pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117`, and restart webui.sh. That fixed it for me; I hope my experience is helpful to you.

zhedahe commented 1 year ago


PS: my GPU is a 2080 Ti, nvidia-smi shows the CUDA driver version as 11.4, and my Python version is 3.10.9. At first I had installed PyTorch 2.0.1, which produced the same error described above.
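
A quick way to confirm the downgrade took effect is a short sanity check like the sketch below; this is not an official step, and it assumes the reinstall finished cleanly in the webui's environment.

```python
# A quick sanity check after reinstalling (a sketch, not an official step):
# confirm the new wheel reports the expected CUDA build and that a small
# fp16 convolution now runs on the GPU.
import torch

assert torch.cuda.is_available(), "CUDA is not visible to torch"
print("torch:", torch.__version__)              # expect 1.13.1+cu117
print("built for CUDA:", torch.version.cuda)    # expect 11.7
print("cuDNN:", torch.backends.cudnn.version())
print("GPU:", torch.cuda.get_device_name(0))

conv = torch.nn.Conv2d(4, 8, kernel_size=3, padding=1).cuda().half()
out = conv(torch.randn(1, 4, 32, 32, device="cuda", dtype=torch.float16))
print("fp16 Conv2d OK:", tuple(out.shape))
```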

haddis3 commented 1 year ago

> Hi all, I solved this bug with the following steps: first uninstall your installed torch and torchvision, then run `pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117`, and restart webui.sh.

Thanks, it worked. PS: my GPU is an A6000, and the CUDA driver version is 11.4.

lijain commented 1 year ago

Your environment configuration itself is fine. Check whether the CUDA and cuDNN versions that your torch build references conflict with your driver; if there is a conflict, resolving it should fix the problem.
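
A minimal sketch of that check in Python (assuming `nvidia-smi` is on the PATH): compare the CUDA version the driver supports with the CUDA version the installed torch wheel was built against, plus the cuDNN version torch bundles.

```python
# Sketch of a torch / driver / cuDNN compatibility check.
# Assumption: nvidia-smi is available on the PATH.
import subprocess
import torch

print("torch:", torch.__version__)
print("torch built for CUDA:", torch.version.cuda)
print("cuDNN bundled with torch:", torch.backends.cudnn.version())

# nvidia-smi reports the highest CUDA version the installed driver supports.
smi = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
print(next(line.strip() for line in smi.splitlines() if "CUDA Version" in line))

# In this report the driver advertises CUDA 11.2 while the torch 2.0.0 wheel
# targets CUDA 11.7/11.8, which is one plausible source of the cuDNN FIND error.
```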