openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

TypeError: StableDiffusionPipeline.__init__() got an unexpected keyword argument 'tokenizer_2' #26013

Open Neonturtle2 opened 1 month ago

Neonturtle2 commented 1 month ago

Question

When I use an SDXL model with OpenVINO, I get an error. The error does not occur with other model types, or when OpenVINO is disabled. I have tried looking up how to fix this, but found no results.


I'm not sure if this is because I installed OpenVINO incorrectly, if I'm missing dependencies, if packages are conflicting, or if my hardware doesn't support it. I'm also new to AI, so I don't know much about how this works. Any help would be appreciated.

Logs

(I have replaced my username with "user")

*** Error completing request
*** Arguments: ('task(6a0r664gtwffq87)', 'taco on a plate', '', [], 28, 'Euler', 1, 1, 12, 512, 512, False, 0.7, 1, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001D58FD5C9D0>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler', True, False, 'Latent', 10, 0.5, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\openvino webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\<user>\Documents\openvino webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\<user>\Documents\openvino webui\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "C:\Users\<user>\Documents\openvino webui\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "C:\Users\<user>\Documents\openvino webui\scripts\openvino_accelerate.py", line 1276, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, override_hires, upscaler, hires_steps, d_strength, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "C:\Users\<user>\Documents\openvino webui\scripts\openvino_accelerate.py", line 914, in process_images_openvino
        shared.sd_diffusers_model = get_diffusers_sd_model(model_config, vae_ckpt, sampler_name, enable_caching, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "C:\Users\<user>\Documents\openvino webui\scripts\openvino_accelerate.py", line 609, in get_diffusers_sd_model
        sd_model = StableDiffusionPipeline.from_single_file(checkpoint_path, original_config_file=checkpoint_config, use_safetensors=True, variant="fp32", dtype=torch.float32)
      File "C:\Users\<user>\Documents\openvino webui\venv\lib\site-packages\diffusers\loaders.py", line 2822, in from_single_file
        pipe = download_from_original_stable_diffusion_ckpt(
      File "C:\Users\<user>\Documents\openvino webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\convert_from_ckpt.py", line 1738, in download_from_original_stable_diffusion_ckpt
        pipe = pipeline_class(
    TypeError: StableDiffusionPipeline.__init__() got an unexpected keyword argument 'tokenizer_2'
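
The traceback boils down to a class mismatch: an SDXL checkpoint carries a second tokenizer/text-encoder pair, which the converter passes as `tokenizer_2`/`text_encoder_2` keyword arguments, but `StableDiffusionPipeline.__init__()` (the SD 1.x/2.x class) only accepts one pair. A minimal sketch of this failure mode, using hypothetical stand-in classes rather than the real diffusers API:

```python
# Stand-in classes (NOT the real diffusers pipelines) showing why
# feeding SDXL components to an SD 1.x/2.x-style __init__ fails.
class SDPipelineSketch:
    """SD 1.x/2.x style: a single tokenizer and text encoder."""
    def __init__(self, tokenizer=None, text_encoder=None):
        self.tokenizer = tokenizer
        self.text_encoder = text_encoder

class SDXLPipelineSketch(SDPipelineSketch):
    """SDXL style: accepts the second tokenizer/text-encoder pair."""
    def __init__(self, tokenizer=None, text_encoder=None,
                 tokenizer_2=None, text_encoder_2=None):
        super().__init__(tokenizer, text_encoder)
        self.tokenizer_2 = tokenizer_2
        self.text_encoder_2 = text_encoder_2

# Components a converted SDXL checkpoint would supply:
sdxl_components = {"tokenizer": "tok", "text_encoder": "enc",
                   "tokenizer_2": "tok2", "text_encoder_2": "enc2"}

try:
    SDPipelineSketch(**sdxl_components)   # wrong class for an SDXL checkpoint
except TypeError as e:
    print(e)  # ... got an unexpected keyword argument 'tokenizer_2'

pipe = SDXLPipelineSketch(**sdxl_components)  # the SDXL-style class accepts them
```

This is why the fix below is to tell the script the checkpoint is SDXL, so it constructs the SDXL pipeline class instead.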

System Information

Python 3.10.9 (64-bit)
Windows 11
Intel(R) UHD Graphics 620 (iGPU)
Intel(R) Core(TM) i7-8550U CPU
32 GB RAM
OpenVINO 2024.3.0

Package Versions:

absl-py 2.1.0 accelerate 0.25.0 aiofiles 23.2.1 aiohappyeyeballs 2.3.5 aiohttp 3.10.3 aiosignal 1.3.1 altair 4.2.2 annotated-types 0.7.0 antlr4-python3-runtime 4.9.3 anyio 4.4.0 appdirs 1.4.4 astunparse 1.6.3 async-timeout 4.0.3 attrs 24.1.0 bitsandbytes 0.43.0 certifi 2024.7.4 charset-normalizer 3.3.2 clang 17.0.6 click 8.1.7 colorama 0.4.6 coloredlogs 15.0.1 contourpy 1.2.1 cycler 0.12.1 dadaptation 3.1 diffusers 0.25.0 docker-pycreds 0.4.0 easygui 0.98.3 einops 0.7.0 entrypoints 0.4 exceptiongroup 1.2.2 fairscale 0.4.13 fastapi 0.112.0 ffmpy 0.4.0 filelock 3.15.4 flatbuffers 24.3.25 fonttools 4.53.1 frozenlist 1.4.1 fsspec 2024.6.1 ftfy 6.1.1 gast 0.6.0 gitdb 4.0.11 GitPython 3.1.43 google-pasta 0.2.0 gradio 4.36.1 gradio_client 1.0.1 grpcio 1.65.5 grpcio-tools 1.65.5 h11 0.14.0 h5py 3.11.0 httpcore 1.0.5 httpx 0.27.0 huggingface-hub 0.24.5 humanfriendly 10.0 idna 3.7 imageio 2.35.0 imagesize 1.4.1 importlib_metadata 8.2.0 importlib_resources 6.4.0 intel-openmp 2021.4.0 invisible-watermark 0.2.0 Jinja2 3.1.4 jsonschema 4.23.0 jsonschema-specifications 2023.12.1 keras 3.4.1 kiwisolver 1.4.5 lazy_loader 0.4 libclang 18.1.1 lightning-utilities 0.11.6 lion-pytorch 0.0.6 lmdb 1.5.1 lycoris-lora 2.2.0.post3 Markdown 3.6 markdown-it-py 3.0.0 MarkupSafe 2.1.5 matplotlib 3.9.1 mdurl 0.1.2 mkl 2021.4.0 ml-dtypes 0.4.0 mpmath 1.3.0 multidict 6.0.5 namex 0.0.8 networkx 3.3 numpy 1.26.4 omegaconf 2.3.0 onnx 1.16.1 onnxruntime-gpu 1.17.1 open-clip-torch 2.20.0 opencv-python 4.7.0.68 openvino 2024.3.0 openvino-telemetry 2024.1.0 opt-einsum 3.3.0 optree 0.12.1 orjson 3.10.6 packaging 24.1 pandas 2.2.2 pathtools 0.1.2 pillow 10.4.0 pip 24.2 prodigyopt 1.0 protobuf 5.27.3 psutil 6.0.0 pydantic 2.8.2 pydantic_core 2.20.1 pydub 0.25.1 Pygments 2.18.0 pyparsing 3.1.2 pyreadline3 3.4.1 python-dateutil 2.9.0.post0 python-multipart 0.0.9 pytorch-lightning 1.9.0 pytz 2024.1 PyWavelets 1.6.0 PyYAML 6.0.1 referencing 0.35.1 regex 2024.7.24 requests 2.32.3 rich 13.7.1 rpds-py 0.19.1 ruff 
0.5.6 safetensors 0.4.2 scikit-image 0.24.0 scipy 1.11.4 semantic-version 2.10.0 sentencepiece 0.2.0 sentry-sdk 2.12.0 setproctitle 1.3.3 setuptools 65.5.0 shellingham 1.5.4 six 1.16.0 smmap 5.0.1 sniffio 1.3.1 starlette 0.37.2 sympy 1.13.1 tbb 2021.11.0 tensorboard 2.17.0 tensorboard-data-server 0.7.2 tensorflow 2.17.0 tensorflow-intel 2.17.0 tensorflow-io 0.31.0 tensorflow-io-gcs-filesystem 0.31.0 termcolor 2.4.0 tifffile 2024.8.10 timm 0.6.12 tk 0.1.0 tokenizers 0.19.1 toml 0.10.2 tomlkit 0.12.0 toolz 0.12.1 torch 2.1.2+cu118 torchaudio 2.1.2+cu118 torchmetrics 1.4.1 torchvision 0.16.2+cu118 tqdm 4.66.5 transformers 4.45.0.dev0 typer 0.12.3 typing_extensions 4.12.2 tzdata 2024.1 urllib3 2.2.2 uvicorn 0.30.5 voluptuous 0.13.1 vswhere 1.4.0 wandb 0.15.11 wcwidth 0.2.13 websockets 11.0.3 Werkzeug 3.0.3 wheel 0.44.0 wrapt 1.16.0 xformers 0.0.23.post1+cu118 yarl 1.9.4 zipp 3.19.2

likholat commented 1 month ago

@Neonturtle2 please check the "Loaded checkpoint is SDXL checkpoint" checkbox for SDXL models.

(screenshot: the "Loaded checkpoint is SDXL checkpoint" checkbox)
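
The checkbox presumably maps to the `is_xl_ckpt` flag seen in the traceback, which decides the pipeline class used for loading. A hypothetical simplification of that branch (the helper name and return values are illustrative, not the script's actual code):

```python
# Hypothetical simplification of the branch inside get_diffusers_sd_model():
# the "Loaded checkpoint is SDXL checkpoint" checkbox sets is_xl_ckpt, which
# selects which pipeline class from_single_file() is called on.
def pick_pipeline_class(is_xl_ckpt: bool) -> str:
    if is_xl_ckpt:
        return "StableDiffusionXLPipeline"   # accepts tokenizer_2/text_encoder_2
    return "StableDiffusionPipeline"         # single tokenizer/text encoder only

print(pick_pipeline_class(True))   # StableDiffusionXLPipeline
print(pick_pipeline_class(False))  # StableDiffusionPipeline
```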

Neonturtle2 commented 1 month ago

I have checked that checkbox, but now I'm getting a new error.


torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x0000017793DCCB80> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:22 – 20:47:26] (the same Linear.forward warning repeats 14 more times)
[2024-08-13 20:47:26,782] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py <function Conv2d.forward at 0x0000017793DCE200> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:27 – 20:47:29] (the Linear.forward warning repeats 5 more times)
list index out of range
Traceback (most recent call last):
  File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 201, in openvino_fx
    compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 427, in openvino_compile_cached_model
    om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
    result = super().run_node(n)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
    return submod(*args, **kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
    return originals.GroupNorm_forward(self, input)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
    return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
    return func(*args, **kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
  0%|                                                                                           | 0/28 [00:12<?, ?it/s]
*** Error completing request
*** Arguments: ('task(edq127kcif29xtt)', 'tacos on a plate', '', [], 28, 'Euler', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000177C21D69B0>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler', True, False, 'Latent', 10, 0.5, True, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 201, in openvino_fx
        compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 427, in openvino_compile_cached_model
        om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
    IndexError: list index out of range

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
        result = super().run_node(n)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
        return getattr(self, n.op)(n.target, args, kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
        return submod(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
        return F.group_norm(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
        return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
        result = mode.__torch_function__(public_api, types, args, kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
        return func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 670, in call_user_compiler
        compiled_fn = compiler_fn(gm, self.fake_example_inputs())
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\debug_utils.py", line 1055, in debug_wrapper
        compiled_gm = compiler_fn(gm, example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 107, in wrapper
        return fn(model, inputs, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 234, in openvino_fx
        return compile_fx(subgraph, example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 415, in compile_fx
        model_ = overrides.fuse_fx(model_, example_inputs_)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 96, in fuse_fx
        gm = mkldnn_fuse_fx(gm, example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\mkldnn.py", line 509, in mkldnn_fuse_fx
        ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 185, in propagate
        return super().run(*args)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 136, in run
        self.env[node] = self.run_node(node)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 152, in run_node
        raise RuntimeError(
    RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "C:\\Users\\<user>\\Documents\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1276, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, override_hires, upscaler, hires_steps, d_strength, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 998, in process_images_openvino
        output = shared.sd_diffusers_model(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1039, in __call__
        noise_pred = self.unet(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 82, in forward
        return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 209, in _fn
        return fn(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 924, in forward
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 981, in <graph break in forward>
        aug_emb = self.add_embedding(add_embeds)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1076, in <graph break in forward>
        sample, res_samples = downsample_block(hidden_states=sample, temb=emb, scale=lora_scale)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1259, in forward
        hidden_states = resnet(hidden_states, temb, scale=scale)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 337, in catch_errors
        return callback(frame, cache_size, hooks)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 404, in _convert_frame
        result = inner_convert(frame, cache_size, hooks)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 104, in _fn
        return fn(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 262, in _convert_frame_assert
        return _compile(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 324, in _compile
        out_code = transform_code_object(code, transform)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 445, in transform_code_object
        transformations(instructions, code_options)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 311, in transform
        tracer.run()
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1726, in run
        super().run()
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 576, in run
        and self.step()
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 540, in step
        getattr(self, inst.opname)(inst)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 372, in wrapper
        self.output.compile_subgraph(self, reason=reason)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 541, in compile_subgraph
        self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 588, in compile_and_call_fx_graph
        compiled_fn = self.call_user_compiler(gm)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 675, in call_user_compiler
        raise BackendCompilerFailed(self.compiler_fn, e) from e
    torch._dynamo.exc.BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "C:\\Users\\<user>\\Documents\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    Set torch._dynamo.config.verbose=True for more information

    You can suppress this exception and fall back to eager by setting:
        torch._dynamo.config.suppress_errors = True

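The underlying `RuntimeError` suggests shapes from two different model families are being mixed: a GroupNorm weight of size 320 (the channel count of the first SD 1.x UNet block) is applied to an SDXL-shaped tensor of `[2, 1280]`. A stale compiled-model cache (`openvino_compile_cached_model` in the log, used when caching is enabled) or the WebUI's LoRA `GroupNorm` hook are plausible suspects, though that is a guess from the traceback. A pure-Python sketch of the precondition torch enforces here (the function name is illustrative):

```python
def check_group_norm_shapes(input_shape, weight_shape):
    # Mirrors the precondition torch.group_norm enforces: the weight vector
    # must match the channel dimension (dim 1) of the input tensor.
    channels = input_shape[1]
    if weight_shape[0] != channels:
        raise RuntimeError(
            "Expected weight to be a vector of size equal to the number of "
            f"channels in input, but got weight of shape {list(weight_shape)} "
            f"and input of shape {list(input_shape)}"
        )

# SD 1.x first UNet block: 320 channels -- consistent with a 320-element weight
check_group_norm_shapes((2, 320, 64, 64), (320,))

# The failing case from the log: an SDXL embedding-shaped input vs. a 320 weight
try:
    check_group_norm_shapes((2, 1280), (320,))
except RuntimeError as e:
    print(e)
```

If the cache is the culprit, clearing any previously compiled models (or disabling caching) before switching model families would avoid reusing a graph compiled for a different UNet.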
andrei-kochin commented 3 weeks ago

@likholat do you have any news here?

likholat commented 3 weeks ago

@cavusmustafa could you take a look?

cavusmustafa commented 3 weeks ago

I have checked the checkmark box, but now I'm getting this new error.

(quoted log omitted — identical to the log in the comment above)
[2024-08-13 20:47:26,707] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x0000017793DCCB80> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:26,782] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py <function Conv2d.forward at 0x0000017793DCE200> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:27,236] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x0000017793DCCB80> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:27,349] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x0000017793DCCB80> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:27,856] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x0000017793DCCB80> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:27,925] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x0000017793DCCB80> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[2024-08-13 20:47:29,117] torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x0000017793DCCB80> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
list index out of range
Traceback (most recent call last):
  File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 201, in openvino_fx
    compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 427, in openvino_compile_cached_model
    om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
    result = super().run_node(n)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
    return submod(*args, **kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
    return originals.GroupNorm_forward(self, input)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
    return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
    return func(*args, **kwargs)
  File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
  0%|                                                                                           | 0/28 [00:12<?, ?it/s]
*** Error completing request
*** Arguments: ('task(edq127kcif29xtt)', 'tacos on a plate', '', [], 28, 'Euler', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x00000177C21D69B0>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler', True, False, 'Latent', 10, 0.5, True, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 201, in openvino_fx
        compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 427, in openvino_compile_cached_model
        om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
    IndexError: list index out of range

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
        result = super().run_node(n)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
        return getattr(self, n.op)(n.target, args, kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
        return submod(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
        return F.group_norm(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
        return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
        result = mode.__torch_function__(public_api, types, args, kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
        return func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 670, in call_user_compiler
        compiled_fn = compiler_fn(gm, self.fake_example_inputs())
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\debug_utils.py", line 1055, in debug_wrapper
        compiled_gm = compiler_fn(gm, example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 107, in wrapper
        return fn(model, inputs, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 234, in openvino_fx
        return compile_fx(subgraph, example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 415, in compile_fx
        model_ = overrides.fuse_fx(model_, example_inputs_)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 96, in fuse_fx
        gm = mkldnn_fuse_fx(gm, example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\mkldnn.py", line 509, in mkldnn_fuse_fx
        ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 185, in propagate
        return super().run(*args)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\interpreter.py", line 136, in run
        self.env[node] = self.run_node(node)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 152, in run_node
        raise RuntimeError(
    RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "C:\\Users\\<user>\\Documents\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1276, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, override_hires, upscaler, hires_steps, d_strength, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\scripts\openvino_accelerate.py", line 998, in process_images_openvino
        output = shared.sd_diffusers_model(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1039, in __call__
        noise_pred = self.unet(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 82, in forward
        return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 209, in _fn
        return fn(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 924, in forward
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 981, in <graph break in forward>
        aug_emb = self.add_embedding(add_embeds)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1076, in <graph break in forward>
        sample, res_samples = downsample_block(hidden_states=sample, temb=emb, scale=lora_scale)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1259, in forward
        hidden_states = resnet(hidden_states, temb, scale=scale)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 337, in catch_errors
        return callback(frame, cache_size, hooks)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 404, in _convert_frame
        result = inner_convert(frame, cache_size, hooks)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 104, in _fn
        return fn(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 262, in _convert_frame_assert
        return _compile(
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 324, in _compile
        out_code = transform_code_object(code, transform)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 445, in transform_code_object
        transformations(instructions, code_options)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 311, in transform
        tracer.run()
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1726, in run
        super().run()
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 576, in run
        and self.step()
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 540, in step
        getattr(self, inst.opname)(inst)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 372, in wrapper
        self.output.compile_subgraph(self, reason=reason)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 541, in compile_subgraph
        self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 588, in compile_and_call_fx_graph
        compiled_fn = self.call_user_compiler(gm)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 675, in call_user_compiler
        raise BackendCompilerFailed(self.compiler_fn, e) from e
    torch._dynamo.exc.BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "C:\\Users\\<user>\\Documents\\stable-diffusion-webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "C:\Users\<user>\Documents\stable-diffusion-webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    Set torch._dynamo.config.verbose=True for more information

    You can suppress this exception and fall back to eager by setting:
        torch._dynamo.config.suppress_errors = True
```
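For context, the final `RuntimeError` in the log is a plain shape mismatch: a GroupNorm whose affine parameters were sized for a 320-channel layer is being fed a `[2, 1280]` tensor. A minimal standalone reproduction (the channel counts are taken from the log; everything else is illustrative):

```python
import torch
import torch.nn.functional as F

# Affine parameters sized for a 320-channel GroupNorm, as in the log.
weight = torch.ones(320)
bias = torch.zeros(320)

# The tensor reaching the norm has 1280 channels instead of 320.
x = torch.randn(2, 1280)

try:
    F.group_norm(x, 32, weight=weight, bias=bias)
except RuntimeError as e:
    # Same message as in the traceback above: weight must match the
    # number of input channels.
    print(type(e).__name__, e)
```

This suggests a graph compiled for one model's layer shapes is being replayed against a checkpoint with different ones, which is consistent with the cached-model `IndexError` earlier in the log.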

Hi, could you delete the "cache" folder in the webui directory and try again, please?
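For reference, the cache folder can also be removed from a terminal. A sketch assuming the install path shown in the log; adjust `WEBUI_DIR` to your own location (on Windows you can equally delete the folder in Explorer):

```shell
# Path assumed from the log above; change WEBUI_DIR to match your install.
WEBUI_DIR="$HOME/Documents/openvino webui"

# Remove the compiled-model cache; it is rebuilt on the next run.
rm -rf "$WEBUI_DIR/cache"
```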

Neonturtle2 commented 3 weeks ago

I'm still getting an error:

torch._dynamo.symbolic_convert: [WARNING] C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\linear.py <function Linear.forward at 0x000001E367E34550> [UserDefinedObjectVariable(LoraPatches), NNModuleVariable(), TensorVariable()] {} too many positional arguments
[... the same "too many positional arguments" warning repeated 19 more times for Linear.forward between 19:24:54 and 19:24:59, plus one identical warning for Conv2d.forward ...]
list index out of range
Traceback (most recent call last):
  File "C:\Users\<user>\Desktop\AI\openvino webui\scripts\openvino_accelerate.py", line 201, in openvino_fx
    compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
  File "C:\Users\<user>\Desktop\AI\openvino webui\scripts\openvino_accelerate.py", line 427, in openvino_compile_cached_model
    om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
    result = super().run_node(n)
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
    return getattr(self, n.op)(n.target, args, kwargs)
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
    return submod(*args, **kwargs)
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\<user>\Desktop\AI\openvino webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
    return originals.GroupNorm_forward(self, input)
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
    return F.group_norm(
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
    return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
    result = mode.__torch_function__(public_api, types, args, kwargs)
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
    return func(*args, **kwargs)
  File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
    return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]
  0%|                                                                                           | 0/28 [00:05<?, ?it/s]
*** Error completing request
*** Arguments: ('task(9a38ksgr9bdr54q)', 'tacos on a plate', '', [], 28, 'Euler', 1, 1, 12, 512, 512, False, 0.7, 1, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001E3164FFD90>, 1, False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler', True, False, 'Latent', 10, 0.5, True, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "C:\Users\<user>\Desktop\AI\openvino webui\scripts\openvino_accelerate.py", line 201, in openvino_fx
        compiled_model = openvino_compile_cached_model(maybe_fs_cached_name, *example_inputs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\scripts\openvino_accelerate.py", line 427, in openvino_compile_cached_model
        om.inputs[idx].get_node().set_element_type(dtype_mapping[input_data.dtype])
    IndexError: list index out of range

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 147, in run_node
        result = super().run_node(n)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\interpreter.py", line 177, in run_node
        return getattr(self, n.op)(n.target, args, kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\interpreter.py", line 294, in call_module
        return submod(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\extensions-builtin\Lora\networks.py", line 459, in network_GroupNorm_forward
        return originals.GroupNorm_forward(self, input)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\normalization.py", line 273, in forward
        return F.group_norm(
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\functional.py", line 2526, in group_norm
        return handle_torch_function(group_norm, (input, weight, bias,), input, num_groups, weight=weight, bias=bias, eps=eps)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\overrides.py", line 1534, in handle_torch_function
        result = mode.__torch_function__(public_api, types, args, kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 38, in __torch_function__
        return func(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\functional.py", line 2530, in group_norm
        return torch.group_norm(input, num_groups, weight, bias, eps, torch.backends.cudnn.enabled)
    RuntimeError: Expected weight to be a vector of size equal to the number of channels in input, but got weight of shape [320] and input of shape [2, 1280]

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 670, in call_user_compiler
        compiled_fn = compiler_fn(gm, self.fake_example_inputs())
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\debug_utils.py", line 1055, in debug_wrapper
        compiled_gm = compiler_fn(gm, example_inputs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 107, in wrapper
        return fn(model, inputs, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\scripts\openvino_accelerate.py", line 234, in openvino_fx
        return compile_fx(subgraph, example_inputs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 415, in compile_fx
        model_ = overrides.fuse_fx(model_, example_inputs_)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_inductor\overrides.py", line 96, in fuse_fx
        gm = mkldnn_fuse_fx(gm, example_inputs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_inductor\mkldnn.py", line 509, in mkldnn_fuse_fx
        ShapeProp(gm, fake_mode=fake_mode).propagate(*example_inputs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 185, in propagate
        return super().run(*args)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\interpreter.py", line 136, in run
        self.env[node] = self.run_node(node)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\fx\passes\shape_prop.py", line 152, in run_node
        raise RuntimeError(
    RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "C:\\Users\\<user>\\Desktop\\AI\\openvino webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "C:\Users\<user>\Desktop\AI\openvino webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\<user>\Desktop\AI\openvino webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "C:\Users\<user>\Desktop\AI\openvino webui\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "C:\Users\<user>\Desktop\AI\openvino webui\scripts\openvino_accelerate.py", line 1276, in run
        processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, override_hires, upscaler, hires_steps, d_strength, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
      File "C:\Users\<user>\Desktop\AI\openvino webui\scripts\openvino_accelerate.py", line 998, in process_images_openvino
        output = shared.sd_diffusers_model(
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion_xl\pipeline_stable_diffusion_xl.py", line 1039, in __call__
        noise_pred = self.unet(
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 82, in forward
        return self.dynamo_ctx(self._orig_mod.forward)(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 209, in _fn
        return fn(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 924, in forward
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 981, in <graph break in forward>
        aug_emb = self.add_embedding(add_embeds)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\diffusers\models\unet_2d_condition.py", line 1076, in <graph break in forward>
        sample, res_samples = downsample_block(hidden_states=sample, temb=emb, scale=lora_scale)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 1259, in forward
        hidden_states = resnet(hidden_states, temb, scale=scale)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 337, in catch_errors
        return callback(frame, cache_size, hooks)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 404, in _convert_frame
        result = inner_convert(frame, cache_size, hooks)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 104, in _fn
        return fn(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 262, in _convert_frame_assert
        return _compile(
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 324, in _compile
        out_code = transform_code_object(code, transform)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 445, in transform_code_object
        transformations(instructions, code_options)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 311, in transform
        tracer.run()
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 1726, in run
        super().run()
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 576, in run
        and self.step()
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 540, in step
        getattr(self, inst.opname)(inst)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 372, in wrapper
        self.output.compile_subgraph(self, reason=reason)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 541, in compile_subgraph
        self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 588, in compile_and_call_fx_graph
        compiled_fn = self.call_user_compiler(gm)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\utils.py", line 163, in time_wrapper
        r = func(*args, **kwargs)
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 675, in call_user_compiler
        raise BackendCompilerFailed(self.compiler_fn, e) from e
    torch._dynamo.exc.BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {}) with meta={'nn_module_stack': {'self_norm1': <class 'torch.nn.modules.normalization.GroupNorm'>}, 'stack_trace': '  File "C:\\Users\\<user>\\Desktop\\AI\\openvino webui\\venv\\lib\\site-packages\\diffusers\\models\\resnet.py", line 691, in forward\n    hidden_states = self.norm1(hidden_states)\n'}

    While executing %self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})
    Original traceback:
      File "C:\Users\<user>\Desktop\AI\openvino webui\venv\lib\site-packages\diffusers\models\resnet.py", line 691, in forward
        hidden_states = self.norm1(hidden_states)

    Set torch._dynamo.config.verbose=True for more information

    You can suppress this exception and fall back to eager by setting:
        torch._dynamo.config.suppress_errors = True

The GUI also reports the same failure, identical to the traceback above (including the suggestion to set `torch._dynamo.config.suppress_errors = True`):

BackendCompilerFailed: openvino_fx raised RuntimeError: ShapeProp error for: node=%self_norm1 : [#users=1] = call_module[target=self_norm1](args = (%input_tensor,), kwargs = {})

I'm not sure whether this is different from the error I got last time.
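For anyone triaging: the innermost RuntimeError is PyTorch's standard GroupNorm channel-mismatch check, and it can be reproduced standalone. Only the tensor shapes (`[2, 1280]` input, `[320]` weight) come from the log; the `num_groups=32` value is an assumption for illustration:

```python
import torch
import torch.nn.functional as F

# Shapes taken from the log: the compiled graph feeds a [2, 1280] tensor
# into a GroupNorm whose affine weight has only 320 channels.
x = torch.randn(2, 1280)     # activations with 1280 channels
weight = torch.randn(320)    # GroupNorm weight sized for 320 channels
bias = torch.randn(320)

try:
    F.group_norm(x, num_groups=32, weight=weight, bias=bias)
except RuntimeError as err:
    print(err)  # channel-mismatch message matching the one in the log
```

If I read the traceback right, a graph compiled for one checkpoint's shapes is being applied to SDXL-sized activations, but I'm not certain. As the log itself notes, `torch._dynamo.config.suppress_errors = True` would only fall back to eager mode rather than fix the mismatch.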