Closed: daFritz84 closed this issue 9 months ago
That looks like an ipexrun error. Try removing the --use-ipex argument to disable it.
I also added a DISABLE_IPEXRUN environment variable for this in the dev branch.
Also, use the Diffusers backend; the original A1111 backend has terrible performance with Intel ARC.
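For reference, the two suggested workarounds would be run roughly like this (DISABLE_IPEXRUN only exists on the dev branch; exact flags may vary with your setup):
# without --use-ipex the ipexrun launcher is not used
./webui.sh --debug
# dev branch: keep IPEX but skip the ipexrun wrapper
DISABLE_IPEXRUN=1 ./webui.sh --debug --use-ipex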
I tried a run without '--use-ipex'; now I get a segmentation fault after one successful run with the Original backend. I will try the Diffusers backend next.
./webui.sh --debug INT ✘ 25m 27s
Create and activate python venv
Setting OneAPI environment
:: initializing oneAPI environment ...
webui.sh: BASH_VERSION = 5.2.21(1)-release
args: Using "$@" for setvars.sh arguments: --debug
:: advisor -- latest
:: ccl -- latest
:: compiler -- latest
:: dal -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: dnnl -- latest
:: dpcpp-ct -- latest
:: dpl -- latest
:: ipp -- latest
:: ippcp -- latest
:: mkl -- latest
:: mpi -- latest
:: tbb -- latest
:: vtune -- latest
:: oneAPI environment initialized ::
Launching launch.py...
08:48:41-740300 INFO Starting SD.Next
08:48:41-742946 INFO Logger: file="/home/sseifried/stable-diffusion-webui/sdnext.log" level=DEBUG size=64 mode=create
08:48:41-744019 INFO Python 3.11.6 on Linux
08:48:41-757095 INFO Version: app=sd.next updated=2023-12-17 hash=83785628 url=https://github.com/vladmandic/automatic/tree/master
08:48:42-057570 INFO Platform: arch=x86_64 cpu= system=Linux release=6.6.7-1-MANJARO python=3.11.6
08:48:42-062449 DEBUG Setting environment tuning
08:48:42-066131 DEBUG Cache folder: /home/sseifried/.cache/huggingface/hub
08:48:42-069699 DEBUG Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
08:48:42-074010 DEBUG Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
08:48:42-081100 INFO Intel OneAPI Toolkit detected
08:48:42-085653 DEBUG Package not found: onnxruntime-openvino
08:48:42-088002 INFO Installing package: onnxruntime-openvino
08:48:42-090131 DEBUG Running pip: install --upgrade onnxruntime-openvino
08:48:42-718248 DEBUG Repository update time: Sun Dec 17 02:09:21 2023
08:48:42-719385 INFO Startup: standard
08:48:42-720199 INFO Verifying requirements
08:48:42-741506 INFO Verifying packages
08:48:42-743292 INFO Verifying submodules
08:48:43-000775 DEBUG Submodule: extensions-builtin/sd-extension-chainner / main
08:48:43-010233 DEBUG Submodule: extensions-builtin/sd-extension-system-info / main
08:48:43-019062 DEBUG Submodule: extensions-builtin/sd-webui-agent-scheduler / main
08:48:43-027493 DEBUG Submodule: extensions-builtin/sd-webui-controlnet / main
08:48:43-042813 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
08:48:43-053836 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
08:48:43-063244 DEBUG Submodule: modules/k-diffusion / master
08:48:43-073148 DEBUG Submodule: modules/lora / main
08:48:43-082758 DEBUG Submodule: wiki / master
08:48:43-088758 DEBUG Register paths
08:48:43-140805 DEBUG Installed packages: 219
08:48:43-141655 DEBUG Extensions all: ['sd-webui-agent-scheduler', 'sd-extension-chainner', 'stable-diffusion-webui-rembg', 'sd-extension-system-info', 'Lora', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser']
08:48:43-142802 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/sd-webui-agent-scheduler/install.py
08:48:43-398198 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/stable-diffusion-webui-rembg/install.py
08:48:43-604143 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/sd-extension-system-info/install.py
08:48:43-856870 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/sd-webui-controlnet/install.py
08:48:44-061333 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/stable-diffusion-webui-images-browser/install.py
08:48:44-268954 DEBUG Extensions all: []
08:48:44-269764 INFO Extensions enabled: ['sd-webui-agent-scheduler', 'sd-extension-chainner', 'stable-diffusion-webui-rembg', 'sd-extension-system-info', 'Lora', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser']
08:48:44-270649 INFO Verifying requirements
08:48:44-298841 DEBUG Setup complete without errors: 1702799324
08:48:44-301179 INFO Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
08:48:44-302363 DEBUG Starting module: <module 'webui' from '/home/sseifried/stable-diffusion-webui/webui.py'>
08:48:44-303339 INFO Command line args: ['--debug'] debug=True
/home/sseifried/stable-diffusion-webui/venv/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
08:48:46-428768 DEBUG Load IPEX==2.0.110+xpu
08:48:47-866814 INFO Load packages: torch=2.0.1a0+cxx11.abi diffusers=0.24.0 gradio=3.43.2
08:48:48-335955 DEBUG Read: file="config.json" json=13 bytes=522
08:48:48-338991 INFO Engine: backend=Backend.ORIGINAL compute=ipex mode=no_grad device=xpu cross-optimization="Scaled-Dot-Product"
08:48:48-340388 INFO Device: device=Intel(R) Arc(TM) A750 Graphics n=1 ipex=2.0.110+xpu
2023-12-17 08:48:50.435879: I itex/core/wrapper/itex_gpu_wrapper.cc:35] Intel Extension for Tensorflow* GPU backend is loaded.
2023-12-17 08:48:50.494029: W itex/core/ops/op_init.cc:58] Op: _QuantizedMaxPool3D is already registered in Tensorflow
2023-12-17 08:48:50.510800: I itex/core/devices/gpu/itex_gpu_runtime.cc:129] Selected platform: Intel(R) Level-Zero
2023-12-17 08:48:50.511045: I itex/core/devices/gpu/itex_gpu_runtime.cc:154] number of sub-devices is zero, expose root device.
08:48:50-963881 DEBUG Entering start sequence
08:48:50-965080 DEBUG Initializing
08:48:50-966571 INFO Available VAEs: path="models/VAE" items=0
08:48:50-967412 INFO Disabling uncompatible extensions: backend=Backend.ORIGINAL []
08:48:50-968474 DEBUG Read: file="cache.json" json=1 bytes=185
08:48:50-969406 DEBUG Read: file="metadata.json" json=1 bytes=106
08:48:50-970210 INFO Available models: path="models/Stable-diffusion" items=1 time=0.00
08:48:51-109956 DEBUG Load extensions
08:48:52-045920 INFO Extension: script='extensions-builtin/sd-webui-agent-scheduler/scripts/task_scheduler.py' Using sqlite file: extensions-builtin/sd-webui-agent-scheduler/task_scheduler.sqlite3
08:48:52-145946 INFO Extension: script='extensions-builtin/sd-webui-controlnet/scripts/api.py' ControlNet preprocessor location:
/home/sseifried/stable-diffusion-webui/extensions-builtin/sd-webui-controlnet/annotator/downloads
08:48:52-209579 INFO Extension: script='extensions-builtin/sd-webui-controlnet/scripts/controlnet.py' Warning: ControlNet failed to load SGM - will use LDM instead.
08:48:52-216903 INFO Extension: script='extensions-builtin/sd-webui-controlnet/scripts/hook.py' Warning: ControlNet failed to load SGM - will use LDM instead.
08:48:52-631266 INFO Extensions time: 1.52 { Lora=0.38 sd-webui-agent-scheduler=0.50 sd-webui-controlnet=0.17 stable-diffusion-webui-rembg=0.37 }
08:48:52-667794 DEBUG Read: file="html/upscalers.json" json=4 bytes=2640
08:48:52-669172 DEBUG Read: file="extensions-builtin/sd-extension-chainner/models.json" json=24 bytes=2693
08:48:52-670644 DEBUG chaiNNer models: path="models/chaiNNer" defined=24 discovered=0 downloaded=0
08:48:52-672518 DEBUG Load upscalers: total=50 downloaded=0 user=0 time=0.04 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'LDSR', 'RealESRGAN', 'SCUNet', 'SwinIR', 'SD', 'ESRGAN']
08:48:52-680938 DEBUG Load styles: folder="models/styles" items=288 time=0.01
08:48:52-683212 DEBUG Creating UI
08:48:52-765805 INFO Load UI theme: name="black-teal" style=Auto base=sdnext.css
08:48:52-811867 DEBUG Extra networks: page='model' items=1 subfolders=1 tab=txt2img folders=['models/Stable-diffusion', 'models/Diffusers', 'models/Reference',
'/home/sseifried/stable-diffusion-webui/models/Stable-diffusion'] list=0.00 desc=0.00 info=0.00 workers=2
08:48:52-827057 DEBUG Extra networks: page='style' items=288 subfolders=2 tab=txt2img folders=['models/styles', 'html'] list=0.01 desc=0.00 info=0.00 workers=2
08:48:52-828889 DEBUG Extra networks: page='embedding' items=0 subfolders=1 tab=txt2img folders=['models/embeddings'] list=0.00 desc=0.00 info=0.00 workers=2
08:48:52-830272 DEBUG Extra networks: page='hypernetwork' items=0 subfolders=1 tab=txt2img folders=['models/hypernetworks'] list=0.00 desc=0.00 info=0.00 workers=2
08:48:52-831935 DEBUG Extra networks: page='vae' items=0 subfolders=1 tab=txt2img folders=['models/VAE'] list=0.00 desc=0.00 info=0.00 workers=2
08:48:52-833176 DEBUG Extra networks: page='lora' items=0 subfolders=1 tab=txt2img folders=['models/Lora', 'models/LyCORIS'] list=0.00 desc=0.00 info=0.00 workers=2
08:48:53-050588 DEBUG Read: file="ui-config.json" json=0 bytes=2
08:48:53-155320 DEBUG Themes: builtin=6 default=5 external=55
08:48:53-768061 DEBUG Script: 0.54 ui_tabs /home/sseifried/stable-diffusion-webui/extensions-builtin/stable-diffusion-webui-images-browser/scripts/image_browser.py
08:48:53-825558 DEBUG Extension list: processed=7 installed=7 enabled=7 disabled=0 visible=7 hidden=0
08:48:54-176120 INFO Local URL: http://127.0.0.1:7860/
08:48:54-176998 DEBUG Gradio functions: registered=2101
08:48:54-177702 INFO Initializing middleware
08:48:54-181096 DEBUG Creating API
08:48:54-337073 INFO [AgentScheduler] Task queue is empty
08:48:54-338663 INFO [AgentScheduler] Registering APIs
08:48:54-458806 DEBUG Scripts setup: ['X/Y/Z Grid:0.006', 'ControlNet:0.063']
08:48:54-461214 DEBUG Model metadata: file="metadata.json" no changes
08:48:54-461954 DEBUG Model auto load disabled
08:48:54-462825 DEBUG Save: file="config.json" json=13 bytes=522
08:48:54-463619 INFO Startup time: 10.15 { torch=2.83 gradio=0.70 libraries=3.10 extensions=1.52 face-restore=0.14 ui-extra-networks=0.15 ui-txt2img=0.06 ui-img2img=0.08 ui-settings=0.18
ui-extensions=0.63 ui-defaults=0.06 launch=0.28 api=0.09 app-started=0.19 }
08:49:08-298677 INFO MOTD: N/A
08:49:15-285882 DEBUG Themes: builtin=6 default=5 external=55
08:49:17-168747 INFO Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36 OPR/105.0.0.0
08:49:55-328246 DEBUG txt2img: id_task=task(rt6fqhmknla36wo)|prompt=bee on a
flower|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=None|latent_index=None|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|cli
p_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=600|width=800|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscale
r=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|re
finer_negative=|override_settings_texts=[]
08:49:55-333383 WARNING Selected checkpoint not found: v1-5-pruned-emaonly.safetensors
08:49:55-335811 INFO Select: model="v1-5-pruned-emaonly [6ce0161689]"
08:49:55-338274 DEBUG Load model weights: existing=False target=/home/sseifried/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors info=None
Loading model: /home/sseifried/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/4.3 GB -:--:--
08:49:55-379928 DEBUG Load model: name=/home/sseifried/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors dict=True
08:49:55-405995 DEBUG Desired Torch parameters: dtype=BF16 no-half=False no-half-vae=False upscast=False
08:49:55-407177 INFO Setting Torch parameters: device=xpu dtype=torch.bfloat16 vae=torch.bfloat16 unet=torch.bfloat16 context=no_grad fp16=False bf16=True
08:49:55-408755 DEBUG Model dict loaded: {'ram': {'used': 2.23, 'total': 31.3}, 'gpu': {'used': 0.0, 'total': 7.94}, 'retries': 0, 'oom': 0}
08:49:55-422787 DEBUG Model config loaded: {'ram': {'used': 2.23, 'total': 31.3}, 'gpu': {'used': 0.0, 'total': 7.94}, 'retries': 0, 'oom': 0}
08:50:02-041520 INFO LDM: LatentDiffusion: mode=eps
08:50:02-043245 INFO LDM: DiffusionWrapper params=859.52M
08:50:02-045260 INFO LDM:
08:50:00-499152 DEBUG Server: alive=True jobs=1 requests=187 uptime=72 memory=5.59/31.3 backend=Backend.ORIGINAL state=job="txt2img" 0/0
08:50:02-048528 DEBUG Model created from config: /home/sseifried/stable-diffusion-webui/configs/v1-inference.yaml
08:50:02-049706 INFO Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="/home/sseifried/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors"
size=4068MB
08:50:02-051519 DEBUG Model weights loading: {'ram': {'used': 6.22, 'total': 31.3}, 'gpu': {'used': 0.0, 'total': 7.94}, 'retries': 0, 'oom': 0}
08:50:03-293045 DEBUG Model weights loaded: {'ram': {'used': 9.06, 'total': 31.3}, 'gpu': {'used': 0.0, 'total': 7.94}, 'retries': 0, 'oom': 0}
08:50:04-224007 DEBUG Model weights moved: {'ram': {'used': 8.76, 'total': 31.3}, 'gpu': {'used': 2.03, 'total': 7.94}, 'retries': 0, 'oom': 0}
08:50:04-325451 INFO Applied IPEX Optimize.
08:50:04-326280 INFO Cross-attention: optimization=Scaled-Dot-Product options=[]
08:50:04-415175 INFO Load embeddings: loaded=0 skipped=0 time=0.08
08:50:04-418890 INFO Model loaded in 9.08 { create=6.63 apply=0.59 vae=0.65 move=0.93 hijack=0.11 embeddings=0.09 }
08:50:04-744390 DEBUG gc: collected=295 device=xpu {'ram': {'used': 8.81, 'total': 31.3}, 'gpu': {'used': 2.12, 'total': 7.94}, 'retries': 0, 'oom': 0}
08:50:04-749069 INFO Model load finished: {'ram': {'used': 8.81, 'total': 31.3}, 'gpu': {'used': 2.12, 'total': 7.94}, 'retries': 0, 'oom': 0} cached=0
08:50:05-205117 DEBUG Sampler: sampler="UniPC" config={}
Progress 0.22it/s ━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 5% -:--:-- 0:00:04
08:50:09-901071 DEBUG Load VAE decode approximate: model="models/VAE-approx/model.pt"
Progress 1.02it/s ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:19
08:50:26-099104 DEBUG Saving: image="outputs/text/00012-v1-5-pruned-emaonly-bee on a flower.jpg" type=JPEG size=800x600
08:50:26-106332 INFO Processed: images=1 time=21.24 its=0.94 memory={'ram': {'used': 5.06, 'total': 31.3}, 'gpu': {'used': 3.42, 'total': 7.94}, 'retries': 0, 'oom': 0}
08:50:29-579032 DEBUG txt2img: id_task=task(l6wmawdxnq3gmj9)|prompt=bee on a
flower|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=None|latent_index=None|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|cli
p_skip=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=600|width=800|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscale
r=None|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|re
finer_negative=|override_settings_texts=[]
zsh: segmentation fault (core dumped) ./webui.sh --debug
What is your intel-compute-runtime and oneapi-basekit version?
intel-oneapi-basekit 2023.2.0.49397-1 [Installed]
intel-compute-runtime 23.35.27191.9-1 [Installed]
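(For reference, on an Arch-based system these versions can be listed with pacman; a generic example rather than output from this thread:)
pacman -Q intel-oneapi-basekit intel-compute-runtime   # prints name and version of each installed package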
Just switched to the Diffusers backend. Looks like this fails right away:
./webui.sh --debug SEGV ✘ 1m 9s
Create and activate python venv
Setting OneAPI environment
:: initializing oneAPI environment ...
webui.sh: BASH_VERSION = 5.2.21(1)-release
args: Using "$@" for setvars.sh arguments: --debug
:: advisor -- latest
:: ccl -- latest
:: compiler -- latest
:: dal -- latest
:: debugger -- latest
:: dev-utilities -- latest
:: dnnl -- latest
:: dpcpp-ct -- latest
:: dpl -- latest
:: ipp -- latest
:: ippcp -- latest
:: mkl -- latest
:: mpi -- latest
:: tbb -- latest
:: vtune -- latest
:: oneAPI environment initialized ::
Launching launch.py...
09:03:51-531863 INFO Starting SD.Next
09:03:51-534737 INFO Logger: file="/home/sseifried/stable-diffusion-webui/sdnext.log" level=DEBUG size=64 mode=create
09:03:51-535899 INFO Python 3.11.6 on Linux
09:03:51-549457 INFO Version: app=sd.next updated=2023-12-17 hash=83785628 url=https://github.com/vladmandic/automatic/tree/master
09:03:51-700034 INFO Platform: arch=x86_64 cpu= system=Linux release=6.6.7-1-MANJARO python=3.11.6
09:03:51-701159 DEBUG Setting environment tuning
09:03:51-701968 DEBUG Cache folder: /home/sseifried/.cache/huggingface/hub
09:03:51-702764 DEBUG Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
09:03:51-703788 DEBUG Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
09:03:51-705642 INFO Intel OneAPI Toolkit detected
09:03:51-706663 DEBUG Package not found: onnxruntime-openvino
09:03:51-707431 INFO Installing package: onnxruntime-openvino
09:03:51-708092 DEBUG Running pip: install --upgrade onnxruntime-openvino
09:03:52-416091 DEBUG Repository update time: Sun Dec 17 02:09:21 2023
09:03:52-417318 INFO Startup: standard
09:03:52-418199 INFO Verifying requirements
09:03:52-444074 INFO Verifying packages
09:03:52-446184 INFO Verifying submodules
09:03:52-731330 DEBUG Submodule: extensions-builtin/sd-extension-chainner / main
09:03:52-741738 DEBUG Submodule: extensions-builtin/sd-extension-system-info / main
09:03:52-751952 DEBUG Submodule: extensions-builtin/sd-webui-agent-scheduler / main
09:03:52-761437 DEBUG Submodule: extensions-builtin/sd-webui-controlnet / main
09:03:52-778229 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-images-browser / main
09:03:52-789699 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
09:03:52-799159 DEBUG Submodule: modules/k-diffusion / master
09:03:52-808410 DEBUG Submodule: modules/lora / main
09:03:52-818159 DEBUG Submodule: wiki / master
09:03:52-824254 DEBUG Register paths
09:03:52-880580 DEBUG Installed packages: 219
09:03:52-881416 DEBUG Extensions all: ['sd-webui-agent-scheduler', 'sd-extension-chainner', 'stable-diffusion-webui-rembg', 'sd-extension-system-info', 'Lora', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser']
09:03:52-882586 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/sd-webui-agent-scheduler/install.py
09:03:53-171367 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/stable-diffusion-webui-rembg/install.py
09:03:53-422364 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/sd-extension-system-info/install.py
09:03:53-719283 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/sd-webui-controlnet/install.py
09:03:53-988052 DEBUG Running extension installer: /home/sseifried/stable-diffusion-webui/extensions-builtin/stable-diffusion-webui-images-browser/install.py
09:03:54-225949 DEBUG Extensions all: []
09:03:54-226758 INFO Extensions enabled: ['sd-webui-agent-scheduler', 'sd-extension-chainner', 'stable-diffusion-webui-rembg', 'sd-extension-system-info', 'Lora', 'sd-webui-controlnet',
'stable-diffusion-webui-images-browser']
09:03:54-227637 INFO Verifying requirements
09:03:54-257527 DEBUG Setup complete without errors: 1702800234
09:03:54-260014 INFO Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
09:03:54-261191 DEBUG Starting module: <module 'webui' from '/home/sseifried/stable-diffusion-webui/webui.py'>
09:03:54-262173 INFO Command line args: ['--debug'] debug=True
/home/sseifried/stable-diffusion-webui/venv/lib/python3.11/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
09:03:56-568675 DEBUG Load IPEX==2.0.110+xpu
09:03:57-997021 INFO Load packages: torch=2.0.1a0+cxx11.abi diffusers=0.24.0 gradio=3.43.2
09:03:58-462213 DEBUG Read: file="config.json" json=14 bytes=551
09:03:58-465243 INFO Engine: backend=Backend.DIFFUSERS compute=ipex mode=no_grad device=xpu cross-optimization="Scaled-Dot-Product"
09:03:58-466637 INFO Device: device=Intel(R) Arc(TM) A750 Graphics n=1 ipex=2.0.110+xpu
2023-12-17 09:04:00.546314: I itex/core/wrapper/itex_gpu_wrapper.cc:35] Intel Extension for Tensorflow* GPU backend is loaded.
2023-12-17 09:04:00.603902: W itex/core/ops/op_init.cc:58] Op: _QuantizedMaxPool3D is already registered in Tensorflow
2023-12-17 09:04:00.620722: I itex/core/devices/gpu/itex_gpu_runtime.cc:129] Selected platform: Intel(R) Level-Zero
2023-12-17 09:04:00.621023: I itex/core/devices/gpu/itex_gpu_runtime.cc:154] number of sub-devices is zero, expose root device.
09:04:01-079984 DEBUG Entering start sequence
09:04:01-081176 DEBUG Initializing
09:04:01-082289 INFO Available VAEs: path="models/VAE" items=0
09:04:01-083127 INFO Disabling uncompatible extensions: backend=Backend.DIFFUSERS ['sd-webui-controlnet', 'multidiffusion-upscaler-for-automatic1111', 'a1111-sd-webui-lycoris',
'sd-webui-animatediff']
09:04:01-084206 DEBUG Scanning diffusers cache: models/Diffusers models/Diffusers items=0 time=0.00
09:04:01-085149 DEBUG Read: file="cache.json" json=1 bytes=185
09:04:01-086038 DEBUG Read: file="metadata.json" json=1 bytes=106
09:04:01-086830 INFO Available models: path="models/Stable-diffusion" items=1 time=0.00
09:04:01-224968 DEBUG Load extensions
09:04:02-167403 INFO Extension: script='extensions-builtin/sd-webui-agent-scheduler/scripts/task_scheduler.py' Using sqlite file: extensions-builtin/sd-webui-agent-scheduler/task_scheduler.sqlite3
09:04:02-581516 INFO Extensions time: 1.36 { Lora=0.39 sd-webui-agent-scheduler=0.50 stable-diffusion-webui-rembg=0.37 }
09:04:02-617606 DEBUG Read: file="html/upscalers.json" json=4 bytes=2640
09:04:02-618949 DEBUG Read: file="extensions-builtin/sd-extension-chainner/models.json" json=24 bytes=2693
09:04:02-620495 DEBUG chaiNNer models: path="models/chaiNNer" defined=24 discovered=0 downloaded=0
09:04:02-622316 DEBUG Load upscalers: total=52 downloaded=0 user=0 time=0.04 ['None', 'Lanczos', 'Nearest', 'ChaiNNer', 'LDSR', 'RealESRGAN', 'SCUNet', 'SwinIR', 'SD', 'ESRGAN']
09:04:02-629766 DEBUG Load styles: folder="models/styles" items=288 time=0.01
09:04:02-631960 DEBUG Creating UI
09:04:02-711769 INFO Load UI theme: name="black-teal" style=Auto base=sdnext.css
09:04:02-730964 DEBUG Read: file="html/reference.json" json=18 bytes=10921
09:04:02-740399 DEBUG Extra networks: page='model' items=19 subfolders=3 tab=txt2img folders=['models/Stable-diffusion', 'models/Diffusers', 'models/Reference',
'/home/sseifried/stable-diffusion-webui/models/Stable-diffusion'] list=0.00 desc=0.00 info=0.00 workers=2
09:04:02-749350 DEBUG Extra networks: page='style' items=288 subfolders=2 tab=txt2img folders=['models/styles', 'html'] list=0.01 desc=0.00 info=0.00 workers=2
09:04:02-750750 DEBUG Extra networks: page='embedding' items=0 subfolders=1 tab=txt2img folders=['models/embeddings'] list=0.00 desc=0.00 info=0.00 workers=2
09:04:02-752059 DEBUG Extra networks: page='hypernetwork' items=0 subfolders=1 tab=txt2img folders=['models/hypernetworks'] list=0.00 desc=0.00 info=0.00 workers=2
09:04:02-754158 DEBUG Extra networks: page='vae' items=0 subfolders=1 tab=txt2img folders=['models/VAE'] list=0.00 desc=0.00 info=0.00 workers=2
09:04:02-756272 DEBUG Extra networks: page='lora' items=0 subfolders=1 tab=txt2img folders=['models/Lora', 'models/LyCORIS'] list=0.00 desc=0.00 info=0.00 workers=2
09:04:02-907882 DEBUG Read: file="ui-config.json" json=0 bytes=2
09:04:03-015870 DEBUG Themes: builtin=6 default=5 external=55
09:04:03-602267 DEBUG Script: 0.52 ui_tabs /home/sseifried/stable-diffusion-webui/extensions-builtin/stable-diffusion-webui-images-browser/scripts/image_browser.py
09:04:03-657738 DEBUG Extension list: processed=7 installed=7 enabled=6 disabled=1 visible=7 hidden=0
09:04:03-949606 INFO Local URL: http://127.0.0.1:7860/
09:04:03-950511 DEBUG Gradio functions: registered=1599
09:04:03-951227 INFO Initializing middleware
09:04:03-954576 DEBUG Creating API
09:04:04-095682 INFO [AgentScheduler] Task queue is empty
09:04:04-096588 INFO [AgentScheduler] Registering APIs
09:04:04-196532 DEBUG Scripts setup: ['X/Y/Z Grid:0.006']
09:04:04-197473 DEBUG Model metadata: file="metadata.json" no changes
09:04:04-198235 DEBUG Model auto load disabled
09:04:04-199309 DEBUG Save: file="config.json" json=14 bytes=551
09:04:04-200150 INFO Startup time: 9.93 { torch=3.01 gradio=0.69 libraries=3.08 extensions=1.36 face-restore=0.14 ui-extra-networks=0.13 ui-settings=0.17 ui-extensions=0.61 ui-defaults=0.05
launch=0.22 api=0.08 app-started=0.16 }
09:04:12-710126 DEBUG txt2img: id_task=task(efqnn8oanydyh1b)|prompt=bee on a
flower|negative_prompt=|prompt_styles=[]|steps=20|sampler_index=0|latent_index=0|full_quality=True|restore_faces=False|tiling=False|n_iter=1|batch_size=1|cfg_scale=6|clip_skip
=1|seed=-1.0|subseed=-1.0|subseed_strength=0|seed_resize_from_h=0|seed_resize_from_w=0||height=600|width=800|enable_hr=False|denoising_strength=0.5|hr_scale=2|hr_upscaler=None
|hr_force=False|hr_second_pass_steps=20|hr_resize_x=0|hr_resize_y=0|image_cfg_scale=6|diffusers_guidance_rescale=0.7|refiner_steps=5|refiner_start=0.8|refiner_prompt=|refiner_
negative=|override_settings_texts=[]
09:04:12-713451 WARNING Selected checkpoint not found: v1-5-pruned-emaonly.safetensors
09:04:12-714437 INFO Select: model="v1-5-pruned-emaonly [6ce0161689]"
09:04:12-716114 DEBUG Load model weights: existing=False target=/home/sseifried/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors info=None
Loading model: /home/sseifried/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/4.3 GB -:--:--
09:04:12-773302 DEBUG Desired Torch parameters: dtype=BF16 no-half=False no-half-vae=False upscast=False
09:04:12-775228 INFO Setting Torch parameters: device=xpu dtype=torch.bfloat16 vae=torch.bfloat16 unet=torch.bfloat16 context=no_grad fp16=False bf16=True
09:04:12-776426 INFO Autodetect: model="Stable Diffusion" class=StableDiffusionPipeline file="/home/sseifried/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors"
size=4068MB
09:04:14-053537 DEBUG Setting model: pipeline=StableDiffusionPipeline config={'low_cpu_mem_usage': True, 'torch_dtype': torch.bfloat16, 'load_connected_pipeline': True, 'extract_ema': True,
'force_zeros_for_empty_prompt ': True, 'requires_aesthetics_score': False, 'use_safetensors': True}
09:04:14-055088 DEBUG Setting model: enable VAE slicing
09:04:14-055825 DEBUG Setting model: enable VAE tiling
09:04:14-061043 DEBUG Setting model VAE: name=None upcast=True
09:04:15-066555 INFO Load embeddings: loaded=0 skipped=0 time=0.00
09:04:15-395334 DEBUG gc: collected=11212 device=xpu {'ram': {'used': 3.8, 'total': 31.3}, 'gpu': {'used': 2.06, 'total': 7.94}, 'retries': 0, 'oom': 0}
09:04:15-404448 INFO Load model: time=2.35 { load=2.35 } native=512 {'ram': {'used': 3.8, 'total': 31.3}, 'gpu': {'used': 2.06, 'total': 7.94}, 'retries': 0, 'oom': 0}
zsh: segmentation fault (core dumped) ./webui.sh --debug
Shall I try a different OneAPI basekit / compute runtime version?
These seem fine. I am assuming you've installed the Arch Linux packages listed in the wiki for Intel ARC.
Can you try switching to the dev branch and running webui.sh like this:
DISABLE_IPEXRUN=1 ./webui.sh --use-ipex --debug --reinstall
To switch:
git checkout dev
SDNext with IPEX now installs only the few small OneAPI packages it needs via pip into the venv, so installing the entire OneAPI basekit isn't necessary anymore; SDNext won't use the system one. IPEX and PyTorch were updated to 2.1 and OneAPI to 2024.0. These are the main IPEX updates in the dev branch.
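(A rough way to check which OneAPI runtime packages ended up inside the venv; the package names in the filter are a guess and may differ:)
# run inside the activated venv; filters pip's package list for OneAPI runtime components
pip list | grep -iE 'dpcpp|mkl|oneccl|level-zero'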
Your assumption is correct; I followed the installation routine for Arch.
The dev branch also behaves strangely in my case: the Diffusers backend fails right away with a segmentation fault, and the Original backend gives me one successful run but afterwards also fails with a segmentation fault.
I wonder where those segmentation faults occur and whether there is a way to narrow down the problem. Is there anything I can do to get a traceback for the segmentation fault?
Anyway, I will try to get the Docker setup running instead of the native install and hope for the best.
For info on the segfault, check /var/log.
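(On a systemd-based distro such as Arch or Manjaro, the crash is also recorded in the journal; a rough sketch of how to pull more detail, where <PID> is a placeholder taken from the list output:)
coredumpctl list                          # list crashes captured by systemd-coredump
coredumpctl info <PID>                    # show metadata and the stack trace for one crash
PYTHONFAULTHANDLER=1 ./webui.sh --debug   # additionally dump the Python-level traceback on fatal signals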
Stuff is happening deep inside the Intel libs :see_no_evil: I probably need to create a ticket with the PyTorch extension project.
This is the output from the dev branch / Diffusers backend:
Stack trace of thread 43744:
#0 0x00007f38ca107afc n/a (libze_intel_gpu.so.1 + 0x707afc)
#1 0x00007f38c9f52b63 n/a (libze_intel_gpu.so.1 + 0x552b63)
#2 0x00007f38c9f53194 n/a (libze_intel_gpu.so.1 + 0x553194)
#3 0x00007f38c9f6d898 n/a (libze_intel_gpu.so.1 + 0x56d898)
#4 0x00007f38c9b36737 n/a (libze_intel_gpu.so.1 + 0x136737)
#5 0x00007f38c9c4bd1f n/a (libze_intel_gpu.so.1 + 0x24bd1f)
#6 0x00007f38c9c4d559 n/a (libze_intel_gpu.so.1 + 0x24d559)
#7 0x00007f38dfc2fe58 n/a (/home/sseifried/stable-diffusion-webui/venv/lib/libpi_level_zero.so + 0xace58)
ELF object binary architecture: AMD x86-64
And this is the result from master / Diffusers backend:
Stack trace of thread 48673:
#0 0x00007f9a42107ddc n/a (libze_intel_gpu.so.1 + 0x707ddc)
#1 0x00007f9a41f52b63 n/a (libze_intel_gpu.so.1 + 0x552b63)
#2 0x00007f9a41f53194 n/a (libze_intel_gpu.so.1 + 0x553194)
#3 0x00007f9a41f6d898 n/a (libze_intel_gpu.so.1 + 0x56d898)
#4 0x00007f9a41b36737 n/a (libze_intel_gpu.so.1 + 0x136737)
#5 0x00007f9a41c4bd1f n/a (libze_intel_gpu.so.1 + 0x24bd1f)
#6 0x00007f9a41c4d559 n/a (libze_intel_gpu.so.1 + 0x24d559)
#7 0x00007f9a6e887a13 _ZN9_pi_queue18executeCommandListENSt3__119__hash_map_iteratorINS0_15__hash_iteratorIPNS0_11__hash_nodeINS0_17__hash_value_typeIP25_ze_command_list_handle_t22pi_command_list_info_tEEPvEEEEEEbb (libpi_level_zero.so + 0x70a13)
#8 0x00007f9a6e898eb1 piEnqueueKernelLaunch (libpi_level_zero.so + 0x81eb1)
#9 0x00007f9a7169430f _ZNK4sycl3_V16detail6plugin12call_nocheckILNS1_9PiApiKindE76EJP9_pi_queueP10_pi_kernelmPmS9_S9_mPP9_pi_eventSC_EEE10_pi_resultDpT0_ (libsycl.so.6 + 0x29430f)
#10 0x00007f9a7168b3dd _ZN4sycl3_V16detail16enqueueImpKernelERKSt10shared_ptrINS1_10queue_implEERNS1_8NDRDescTERSt6vectorINS1_7ArgDescESaISA_EERKS2_INS1_18kernel_bundle_implEERKS2_INS1_11kernel_implEERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKlRS9_IP9_pi_eventSaISX_EEPSX_RKSt8functionIFPvPNS1_16AccessorImplHostEEE23_pi_kernel_cache_config (libsycl.so.6 + 0x28b3dd)
#11 0x00007f9a716e1f32 _ZZN4sycl3_V17handler8finalizeEvENK3$_0clEv (libsycl.so.6 + 0x2e1f32)
#12 0x00007f9a716ddc4c _ZN4sycl3_V17handler8finalizeEv (libsycl.so.6 + 0x2ddc4c)
#13 0x00007f9a7170d4df _ZN4sycl3_V16detail10queue_impl15finalizeHandlerINS0_7handlerEEEvRT_RKNS1_2CG6CGTYPEERNS0_5eventE (libsycl.so.6 + 0x30d4df)
#14 0x00007f9a7170cfdc _ZN4sycl3_V16detail10queue_impl11submit_implERKSt8functionIFvRNS0_7handlerEEERKSt10shared_ptrIS2_ESD_SD_RKNS1_13code_locationEPKS3_IFvbbRNS0_5eventEEE (libsycl.so.6 + 0x30cfdc)
#15 0x00007f9a7170c406 _ZN4sycl3_V16detail10queue_impl6submitERKSt8functionIFvRNS0_7handlerEEERKSt10shared_ptrIS2_ERKNS1_13code_locationEPKS3_IFvbbRNS0_5eventEEE (libsycl.so.6 + 0x30c406)
#16 0x00007f9a7170c3c5 _ZN4sycl3_V15queue11submit_implESt8functionIFvRNS0_7handlerEEERKNS0_6detail13code_locationE (libsycl.so.6 + 0x30c3c5)
#17 0x00007f9a987086b7 n/a (/home/sseifried/stable-diffusion-webui/venv/lib/python3.11/site-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-gpu.so + 0x27086b7)
This seems more like a compute runtime issue than an IPEX issue.
My level-zero-headers version:
level-zero-headers-1.14.0-1
Funny thing, I didn't have level-zero-headers installed. But now I'm on the same version; it does not make any difference though.
Anyway, I didn't have any luck with Docker so far, basically the same error. @Disty0, what kernel do you use?
Were you running Docker from the start? Try running it natively; the packages in the Docker image were kinda old.
Try running this:
pacman -S git unzip python-pip python-virtualenv jemalloc intel-media-driver intel-oneapi-basekit intel-compute-runtime intel-graphics-compiler intel-opencl-clang
Kernel: 6.6.7-arch1-1
disty@ArchDesktop
-----------------
OS: Arch Linux x86_64
Host: MS-7A37 1.0
Kernel: 6.6.7-arch1-1
Uptime: 2 days, 22 hours, 27 mins
Packages: 2122 (pacman), 13 (flatpak)
Shell: bash 5.2.21
Resolution: 1920x1080
DE: GNOME
WM: Mutter
WM Theme: Flat-Remix-Blue-Dark-fullPanel
Theme: Flat-Remix-GTK-Blue-Dark-Solid [GTK2/3]
Icons: Flat-Remix-Blue-Dark [GTK2/3]
Terminal: terminator
CPU: AMD Ryzen 7 5800X3D (16) @ 3.400GHz
GPU: Intel DG2 [Arc A770]
Memory: 24697MiB / 48098MiB
No, I ran everything natively. OK, same kernel, and yeah, that's the command line I used to install the necessary dependencies. Not sure what's wrong with my setup. I probably need to reinstall the system; otherwise I can't figure out why the Diffusers backend keeps crashing.
sseifried@Munin
---------------
OS: Manjaro Linux x86_64
Kernel: 6.6.7-1-MANJARO
Uptime: 8 hours, 15 mins
Packages: 1913 (pacman)
Shell: zsh 5.9
Resolution: 3440x1440
DE: Plasma 5.27.10
WM: KWin
Theme: [Plasma], Qogir-dark [GTK2/3]
Icons: Tela-circle-dark [Plasma], Tela-circle-dark [GTK2/3]
Terminal: konsole
CPU: Intel i7-6700 (8) @ 4.000GHz
GPU: Intel DG2 [Arc A750]
Memory: 5020MiB / 32046MiB
I added a DISABLE_VENV_LIBS environment variable in the dev branch. Can you test with DISABLE_VENV_LIBS set to 1 while using / activating the system OneAPI?
source /opt/intel/oneapi/setvars.sh
DISABLE_VENV_LIBS=1 ./webui.sh --use-ipex
I switched to Arch Linux over the holidays, but nonetheless the issues remained the same.
That said, I tested the latest changes on my current Arch Linux installation and could complete a 100-batch txt2img run without problems with the Original backend.
However, inpainting fails. This time I actually get a readable Python traceback, so this is probably a different bug. Please advise if I should create another issue:
20:20:07-252827 DEBUG Image resize: mode=0 resolution=512x512 upscaler=None function=init
20:20:10-471654 ERROR Exception: The size of tensor a (87) must match the size of tensor b (64) at non-singleton dimension 3
20:20:10-473580 ERROR Arguments: args=('task(0i1fjnuta8rikvb)', 2.0, 'flower', '', [], None, None, {'image': <PIL.Image.Image image mode=RGBA size=700x394 at 0x7F8636300490>, 'mask':
<PIL.Image.Image image mode=RGB size=700x394 at 0x7F8636A65060>}, None, None, None, None, 20, None, 4, 0, 1, True, False, False, 1, 1, 6, 6, 0.7, 0, 0, 1, 0.5, -1.0, -1.0, 0,
0, 0, 0, 512, 512, 1, 0, 'None', 0, 32, 0, None, '', '', '', False, 4, 0.95, False, 1, 1, False, 0.6, 1, [], 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True,
50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>',
128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, '', '<p
style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 0, '', [], 0, '', [], 0, '', [],
False, True, False, False, False, False, 0, None, None, False, None, None, False, None, None, False, 50, UiControlNetUnit(enabled=False, module='none', model='None', weight=1,
image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False,
control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none',
model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1,
pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None),
UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1,
threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True,
advanced_weighting=None)) kwargs={}
20:20:10-482627 ERROR gradio call: RuntimeError
╭───────────────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────────────────────────────────────╮
│ /home/sseifried/vladmandic-webui/modules/call_queue.py:31 in f │
│ │
│ 30 │ │ │ try: │
│ ❱ 31 │ │ │ │ res = func(*args, **kwargs) │
│ 32 │ │ │ │ progress.record_results(id_task, res) │
│ │
│ /home/sseifried/vladmandic-webui/modules/img2img.py:261 in img2img │
│ │
│ 260 │ │ if processed is None: │
│ ❱ 261 │ │ │ processed = processing.process_images(p) │
│ 262 │ p.close() │
│ │
│ /home/sseifried/vladmandic-webui/modules/processing.py:776 in process_images │
│ │
│ 775 │ │ │ with context_hypertile_vae(p), context_hypertile_unet(p): │
│ ❱ 776 │ │ │ │ res = process_images_inner(p) │
│ 777 │
│ │
│ /home/sseifried/vladmandic-webui/extensions-builtin/sd-webui-controlnet/scripts/batch_hijack.py:42 in processing_process_images_hijack │
│ │
│ 41 │ │ │ # we are not in batch mode, fallback to original function │
│ ❱ 42 │ │ │ return getattr(processing, '__controlnet_original_process_images_inner')(p, │
│ 43 │
│ │
│ /home/sseifried/vladmandic-webui/modules/processing.py:916 in process_images_inner │
│ │
│ 915 │ │ │ │ with devices.without_autocast() if devices.unet_needs_upcast else device │
│ ❱ 916 │ │ │ │ │ samples_ddim = p.sample(conditioning=c, unconditional_conditioning=u │
│ 917 │ │ │ │ x_samples_ddim = [decode_first_stage(p.sd_model, samples_ddim[i:i+1].to( │
│ │
│ /home/sseifried/vladmandic-webui/modules/processing.py:1397 in sample │
│ │
│ 1396 │ │ x *= self.initial_noise_multiplier │
│ ❱ 1397 │ │ samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, u │
│ 1398 │ │ if self.mask is not None: │
│ │
│ /home/sseifried/vladmandic-webui/modules/sd_samplers_compvis.py:175 in sample_img2img │
│ │
│ 174 │ │ self.sampler.make_schedule(ddim_num_steps=steps, ddim_eta=self.eta, ddim_discret │
│ ❱ 175 │ │ x1 = self.sampler.stochastic_encode(x, torch.tensor([t_enc] * int(x.shape[0])).t │
│ 176 │
│ │
│ /home/sseifried/vladmandic-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py:115 in decorate_context │
│ │
│ 114 │ │ with ctx_factory(): │
│ ❱ 115 │ │ │ return func(*args, **kwargs) │
│ 116 │
│ │
│ /home/sseifried/vladmandic-webui/modules/unipc/sampler.py:70 in stochastic_encode │
│ │
│ 69 │ │ │
│ ❱ 70 │ │ return (sqrt_alpha_prod * x0 + sqrt_one_minus_alpha_prod * noise) │
│ 71 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
RuntimeError: The size of tensor a (87) must match the size of tensor b (64) at non-singleton dimension 3
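(For what it's worth, the mismatched sizes line up with 1/8-scale latents of the two resolutions involved, which suggests the 700x394 inpaint source was never resized to the 512x512 target; this is an educated guess, not a confirmed diagnosis:)
python -c 'print(700 // 8, 512 // 8)'   # -> 87 64, matching "tensor a (87)" and "tensor b (64)"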
Creating a new issue would be better. I suspect the new issue comes from the ControlNet extension. The Diffusers backend has its own ControlNet implementation; try the Diffusers backend.
Issue Description
Running
./webui.sh --debug --use-ipex
enables me to do a couple of successful runs, but then it crashes with a SIGSEGV. Being naive, I would say I'm running into some kind of memory issue, since output image size and batch size deterministically determine how many successful runs I'm able to do.
Version Platform Description
Version: app: SD.next updated: 2023-12-17 hash: 83785628 url: https://github.com/vladmandic/automatic/tree/master
Platform: arch: x86_64 system: Linux release: 6.6.7-1-MANJARO python: 3.11.6
GPU: device: Intel(R) Arc(TM) A750 Graphics (1) ipex: 2.0.110+xpu
Browser Opera One(Version: 105.0.4970.48) Chromium-Version:119.0.6045.199
Relevant log output
Backend
Original
Branch
Master
Model
SD 1.5
Acknowledgements