Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits
What happened?
When running with the OpenVINO acceleration script, clicking Generate throws:
TypeError: Partitioner.__init__() missing 1 required positional argument: 'options'
There is no documentation for what the options argument should be for the class imported via from openvino.frontend.pytorch.torchdynamo.partition import Partitioner.
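A minimal sketch of how the call site might satisfy the new signature, assuming Partitioner now simply expects the torch.compile backend options dict (possibly empty); since the argument is undocumented, both the name and the empty dict here are assumptions:

    from openvino.frontend.pytorch.torchdynamo.partition import Partitioner

    # Assumption: `options` is the backend options dict that torch.compile passes
    # through to the openvino backend; {} is only a placeholder and may not be
    # what the OpenVINO frontend actually expects.
    options = {}
    partitioner = Partitioner(options)  # instead of the bare Partitioner() at openvino_accelerate.py line 218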
Steps to reproduce the problem
git clone the AUTOMATIC1111 stable-diffusion-webui repository
copy openvino_accelerate.py from openvinotoolkit/stable-diffusion-webui/scripts into the webui scripts folder
run webui-user.bat, select the OpenVINO script in the UI, and click Generate
What should have happened?
Normal behaviour would be cl running successfully and compiling the Inductor-generated C++ for my system.
Console logs
PS C:\Users\sd\stable-diffusion-webui> .\webui-user.bat
venv "C:\Users\sd\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.10.1
Commit hash: 82a973c04367123ae98bd9abdf80d9eda9b910e2
Launching Web UI with arguments: --skip-torch-cuda-test --disable-safe-unpickle --lowvram --no-half
C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\timm\models\layers\__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from C:\Users\sd\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: C:\Users\sd\stable-diffusion-webui\configs\v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 14.9s (prepare environment: 0.6s, import torch: 4.9s, import gradio: 1.1s, setup paths: 1.1s, initialize shared: 0.3s, other imports: 1.1s, load scripts: 3.2s, create ui: 1.8s, gradio launch: 0.5s).
Applying attention optimization: InvokeAI... done.
Model loaded in 8.3s (load weights from disk: 1.7s, create model: 0.8s, apply weights to model: 5.5s, calculate empty prompt: 0.2s).
{}
Loading weights [6ce0161689] from C:\Users\sd\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
OpenVINO Script: created model from config : C:\Users\sd\stable-diffusion-webui\configs\v1-inference.yaml
Fetching 11 files: 100%|███████████████████████████████████████████████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 6/6 [00:00<00:00, 14.79it/s]
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
0%| | 0/20 [00:00<?, ?it/s]Partitioner.__init__() missing 1 required positional argument: 'options'
0%| | 0/20 [00:45<?, ?it/s]
*** Error completing request
*** Arguments: ('task(rkxbdpf2rvvwm1j)', <gradio.routes.Request object at 0x00000146C7083AF0>, 'a cat', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 1, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, 'None', 'None', 'CPU', True, 'Euler a', True, False, 'Latent', 10, 0.5, False, 'None', 0.8, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\Users\sd\stable-diffusion-webui\scripts\openvino_accelerate.py", line 218, in openvino_fx
partitioner = Partitioner()
TypeError: Partitioner.__init__() missing 1 required positional argument: 'options'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\cpp_builder.py", line 331, in _run_compile_cmd
status = subprocess.check_output(args=cmd, cwd=cwd, stderr=subprocess.STDOUT)
File "C:\Users\sd\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 420, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\Users\sd\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['cl', '/I', 'C:/Users/sd/AppData/Local/Programs/Python/Python310/Include', '/I', 'C:/Users/sd/AppData/Local/Programs/Python/Python310/Include', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/torch/csrc/api/include', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/TH', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/THC', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/torch/csrc/api/include', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/TH', '/I', 'C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/THC', '/D', 'TORCH_INDUCTOR_CPP_WRAPPER', '/D', 'C10_USING_CUSTOM_GENERATED_MACROS', '/DLL', '/MD', '/O2', '/std:c++20', '/wd4819', '/wd4251', '/wd4244', '/wd4267', '/wd4275', '/wd4018', '/wd4190', '/wd4624', '/wd4067', '/wd4068', '/EHsc', '/openmp', '/openmp:experimental', 'C:/Users/sd/AppData/Local/Temp/torchinductor_sd/3r/c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.cpp', '/LD', '/FeC:/Users/sd/AppData/Local/Temp/torchinductor_sd/3r/c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.pyd', '/link', '/LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/Scripts/libs', '/LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib', '/LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib', '/LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib', 'torch.lib', 'torch_cpu.lib', 'torch_python.lib', 'sleef.lib', 'c10.lib']' returned non-zero exit status 2.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1446, in _call_user_compiler
compiled_fn = compiler_fn(gm, self.example_inputs())
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\repro\after_dynamo.py", line 129, in __call__
compiled_gm = compiler_fn(gm, example_inputs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\__init__.py", line 2280, in __call__
return self.compiler_fn(model_, inputs_, **self.kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 114, in wrapper
return fn(model, inputs, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\scripts\openvino_accelerate.py", line 234, in openvino_fx
return compile_fx(subgraph, example_inputs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1521, in compile_fx
return aot_autograd(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\backends\common.py", line 72, in __call__
cg = aot_module_simplified(gm, example_inputs, **self.kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 1071, in aot_module_simplified
compiled_fn = dispatch_and_compile()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 1056, in dispatch_and_compile
compiled_fn, _ = create_aot_dispatcher_function(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 522, in create_aot_dispatcher_function
return _create_aot_dispatcher_function(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_functorch\aot_autograd.py", line 759, in _create_aot_dispatcher_function
compiled_fn, fw_metadata = compiler_fn(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_functorch\_aot_autograd\jit_compile_runtime_wrappers.py", line 179, in aot_dispatch_base
compiled_fw = compiler(fw_module, updated_flat_args)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1350, in fw_compiler_base
return _fw_compiler_base(model, example_inputs, is_inference)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 1421, in _fw_compiler_base
return inner_compile(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 475, in compile_fx_inner
return wrap_compiler_debug(_compile_fx_inner, compiler_name="inductor")(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\repro\after_aot.py", line 85, in debug_wrapper
inner_compiled_fn = compiler_fn(gm, example_inputs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 661, in _compile_fx_inner
compiled_graph = FxGraphCache.load(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\codecache.py", line 1334, in load
compiled_graph = compile_fx_fn(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 570, in codegen_and_compile
compiled_graph = fx_codegen_and_compile(gm, example_inputs, **fx_kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\compile_fx.py", line 878, in fx_codegen_and_compile
compiled_fn = graph.compile_to_fn()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\graph.py", line 1913, in compile_to_fn
return self.compile_to_module().call
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\graph.py", line 1839, in compile_to_module
return self._compile_to_module()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\graph.py", line 1867, in _compile_to_module
mod = PyCodeCache.load_by_key_path(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\codecache.py", line 2876, in load_by_key_path
mod = _reload_python_module(key, path)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\runtime\compile_tasks.py", line 45, in _reload_python_module
exec(code, mod.__dict__, mod.__dict__)
File "C:\Users\sd\AppData\Local\Temp\torchinductor_sd\l3\cl3t6zhxu5vljeh26k6hryshvukmbnomb63tksftg7nzziteixrh.py", line 29, in <module>
cpp_fused_convolution_0 = async_compile.cpp_pybinding(['const float*', 'const float*', 'float*', 'float*'], '''
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\async_compile.py", line 223, in cpp_pybinding
return CppPythonBindingsCodeCache.load_pybinding(argtypes, source_code)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\codecache.py", line 2385, in load_pybinding
return cls.load_pybinding_async(*args, **kwargs)()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\codecache.py", line 2377, in future
result = get_result()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\codecache.py", line 2178, in load_fn
result = worker_fn()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\codecache.py", line 2218, in _worker_compile_cpp
cpp_builder.build()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\cpp_builder.py", line 1508, in build
status = run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\cpp_builder.py", line 352, in run_compile_cmd
return _run_compile_cmd(cmd_line, cwd)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_inductor\cpp_builder.py", line 346, in _run_compile_cmd
raise exc.CppCompileError(cmd, output) from e
torch._inductor.exc.CppCompileError: C++ compile error
Command:
cl /I C:/Users/sd/AppData/Local/Programs/Python/Python310/Include /I C:/Users/sd/AppData/Local/Programs/Python/Python310/Include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/torch/csrc/api/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/TH /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/THC /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/torch/csrc/api/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/TH /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/THC /D TORCH_INDUCTOR_CPP_WRAPPER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/sd/AppData/Local/Temp/torchinductor_sd/3r/c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.cpp /LD /FeC:/Users/sd/AppData/Local/Temp/torchinductor_sd/3r/c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.pyd /link /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/Scripts/libs /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib c10.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.41.34123 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.cpp
C:/Users/sd/AppData/Local/Temp/torchinductor_sd/vu/cvuvp4i7roujum4xemrfwnb3t4c5t3r3mihr4b7iegh6tcqvdg43.h(3): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\sd\stable-diffusion-webui\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "C:\Users\sd\stable-diffusion-webui\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\modules\txt2img.py", line 106, in txt2img
processed = modules.scripts.scripts_txt2img.run(p, *p.script_args)
File "C:\Users\sd\stable-diffusion-webui\modules\scripts.py", line 780, in run
processed = script.run(p, *script_args)
File "C:\Users\sd\stable-diffusion-webui\scripts\openvino_accelerate.py", line 1276, in run
processed = process_images_openvino(p, model_config, vae_ckpt, p.sampler_name, enable_caching, override_hires, upscaler, hires_steps, d_strength, openvino_device, mode, is_xl_ckpt, refiner_ckpt, refiner_frac)
File "C:\Users\sd\stable-diffusion-webui\scripts\openvino_accelerate.py", line 998, in process_images_openvino
output = shared.sd_diffusers_model(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py", line 1000, in __call__
noise_pred = self.unet(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 465, in _fn
return fn(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 1269, in __call__
return self._torchdynamo_orig_callable(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 1064, in __call__
result = self._inner_convert(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 526, in __call__
return _compile(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 924, in _compile
guarded_code = compile_inner(code, one_graph, hooks, transform)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 666, in compile_inner
return _compile_inner(code, one_graph, hooks, transform)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_utils_internal.py", line 87, in wrapper_function
return function(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 699, in _compile_inner
out_code = transform_code_object(code, transform)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\bytecode_transformation.py", line 1322, in transform_code_object
transformations(instructions, code_options)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 219, in _fn
return fn(*args, **kwargs)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\convert_frame.py", line 634, in transform
tracer.run()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2796, in run
super().run()
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 983, in run
while self.step():
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 895, in step
self.dispatch_table[inst.opcode](self, inst)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2987, in RETURN_VALUE
self._return(inst)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\symbolic_convert.py", line 2972, in _return
self.output.compile_subgraph(
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1142, in compile_subgraph
self.compile_and_call_fx_graph(tx, pass2.graph_output_vars(), root)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1369, in compile_and_call_fx_graph
compiled_fn = self.call_user_compiler(gm)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1416, in call_user_compiler
return self._call_user_compiler(gm)
File "C:\Users\sd\stable-diffusion-webui\venv\lib\site-packages\torch\_dynamo\output_graph.py", line 1465, in _call_user_compiler
raise BackendCompilerFailed(self.compiler_fn, e) from e
torch._dynamo.exc.BackendCompilerFailed: backend='openvino_fx' raised:
CppCompileError: C++ compile error
Command:
cl /I C:/Users/sd/AppData/Local/Programs/Python/Python310/Include /I C:/Users/sd/AppData/Local/Programs/Python/Python310/Include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/torch/csrc/api/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/TH /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/THC /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/torch/csrc/api/include /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/TH /I C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/include/THC /D TORCH_INDUCTOR_CPP_WRAPPER /D C10_USING_CUSTOM_GENERATED_MACROS /DLL /MD /O2 /std:c++20 /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /wd4624 /wd4067 /wd4068 /EHsc /openmp /openmp:experimental C:/Users/sd/AppData/Local/Temp/torchinductor_sd/3r/c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.cpp /LD /FeC:/Users/sd/AppData/Local/Temp/torchinductor_sd/3r/c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.pyd /link /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/Scripts/libs /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib /LIBPATH:C:/Users/sd/stable-diffusion-webui/venv/lib/site-packages/torch/lib torch.lib torch_cpu.lib torch_python.lib sleef.lib c10.lib
Output:
Microsoft (R) C/C++ Optimizing Compiler Version 19.41.34123 for x64
Copyright (C) Microsoft Corporation. All rights reserved.
cl : Command line warning D9025 : overriding '/openmp' with '/openmp:experimental'
c3raa7y7ci2b2udpb5l5gvgemgtjfzweilztxrjqx6uadg6f23nn.cpp
C:/Users/sd/AppData/Local/Temp/torchinductor_sd/vu/cvuvp4i7roujum4xemrfwnb3t4c5t3r3mihr4b7iegh6tcqvdg43.h(3): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
Set TORCH_LOGS="+dynamo" and TORCHDYNAMO_VERBOSE=1 for more information
You can suppress this exception and fall back to eager by setting:
import torch._dynamo
torch._dynamo.config.suppress_errors = True
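For reference, the eager fallback suggested at the end of the log could be tried near the top of scripts/openvino_accelerate.py; the placement is an assumption, and it only masks the compile failure rather than fixing it:

    import torch._dynamo

    # Turns backend compile errors into warnings and falls back to eager execution,
    # so generation can proceed without the openvino_fx / Inductor compiled graph.
    torch._dynamo.config.suppress_errors = True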
Sysinfo
What browsers do you use to access the UI?
Mozilla Firefox
Additional information
No response